Rekognition Custom Labels Guide


Copyright © 2020 Amazon Web Services, Inc. and/or its affiliates. All rights reserved.

Amazon's trademarks and trade dress may not be used in connection with any product or service that is not Amazon's, in any manner that is likely to cause confusion among customers, or in any manner that disparages or discredits Amazon. All other trademarks not owned by Amazon are the property of their respective owners, who may or may not be affiliated with, connected to, or sponsored by Amazon.


Table of Contents

What Is Amazon Rekognition Custom Labels?
    Key Benefits
    Are You a First-Time Amazon Rekognition Custom Labels User?
    Choosing to Use Amazon Rekognition Custom Labels
        Amazon Rekognition Label Detection
        Amazon Rekognition Custom Labels
    Terminology
        Assets
        Object Detection
        Confidence Score
        Dataset
        Evaluation
        Image Label
        Bounding Box Label
        Model Performance Metrics
        Training
        Testing
Setting Up Amazon Rekognition Custom Labels
    Step 1: Create an AWS Account
    Step 2: Create an IAM Administrator User and Group
    Step 3: Set Up the AWS CLI and AWS SDKs
    Step 4: Set Up Amazon S3 Bucket Permissions for SDK Use
        Decrypting Files Encrypted with AWS Key Management Service
    Step 5: Set Up Console Permissions
        Console Access
        External Amazon S3 Buckets
Getting Started with Amazon Rekognition Custom Labels
    Prepare Your Images
    Create a Project
    Create a Dataset
    Create a Test Dataset
    Train Your Model
    Evaluate Your Model
    Use Your Model
    Improve the Performance of Your Model
    Getting Started (Console)
        Step 1: Prepare Your Images
        Step 2: Create Your First Project
        Step 3: Create a Dataset
        Step 4: Create Labels
        Step 5: Identify Objects, Scenes and Concepts with Image-Level Labels
        Step 6: Locate Objects with Bounding Boxes
        Step 7: Train Your Model
        Step 8: Evaluate Your Model
        Step 9: Start Your Model
        Step 10: Analyze Images with Your Model
        Step 11: Stop Your Model
    Getting Started (SDK)
        Creating a Project
        Creating a Dataset
        Training and Evaluating a Model
        Running and Using a Model
        Step 1: Prepare Your Images
        Step 2: Create a Project


        Step 3: Prepare Your Dataset
        Step 4: Train Your Model
        Step 5: Evaluate Your Model
        Step 6: Start Your Model
        Step 7: Analyze Images with Your Model
        Step 8: Stop Your Model
Managing a Project
    Creating a Project
        Creating a Project (Console)
        Creating a Project (SDK)
    Deleting a Project
        Deleting a Project (Console)
        Deleting a Project (SDK)
Managing a Dataset
    Prepare Your Images
    Create a Dataset
    Review Your Images
    Preparing Images
        Image Format
        Input Image Recommendations
        Image Set Size
    Creating a Dataset
        Purposing Datasets
        Amazon S3 Bucket
        Local Computer
        Amazon SageMaker Ground Truth
        Existing Dataset
    Creating a Manifest File
        Image-Level Labels in Manifest Files
        Object Localization in Manifest Files
        Validation Rules for Manifest Files
    Editing Labels
    Assigning Image-Level Labels to an Image
    Drawing Bounding Boxes
Managing a Model
    Training a Model
        Training a Model (Console)
        Training a Model (SDK)
    Evaluating a Trained Model
        Metrics for Evaluating Your Model
        Accessing Training Results (Console)
        Accessing Training Results (SDK)
    Improving a Model
        Data
        Reducing False Positives (Better Precision)
        Reducing False Negatives (Better Recall)
    Running a Trained Model
        Starting or Stopping a Model (Console)
        Starting a Model (SDK)
        Stopping a Model (SDK)
    Deleting a Model
        Deleting a Model (Console)
        Deleting a Model (SDK)
Analyzing an Image
    DetectCustomLabels Operation Request
    DetectCustomLabels Operation Response
Examples


    Model Feedback Solution
    Custom Labels Demonstration
    Video Analysis
    Creating an AWS Lambda Function
Security
    Securing Amazon Rekognition Custom Labels Projects
    Securing DetectCustomLabels
Limits
    Training
    Testing
    Detection
API Reference
    Training Your Model
    Using Your Model
Document History
AWS glossary


What Is Amazon Rekognition Custom Labels?

With Amazon Rekognition Custom Labels, you can identify the objects and scenes in images that are specific to your business needs. For example, you can find your logo in social media posts, identify your products on store shelves, classify machine parts in an assembly line, distinguish healthy and infected plants, or detect animated characters in videos.

Developing a custom model to analyze images is a significant undertaking that requires time, expertise, and resources. It often takes months to complete. Additionally, it can require thousands or tens of thousands of hand-labeled images to provide the model with enough data to make decisions accurately. Gathering this data can take months, and can require large teams of labelers to prepare it for use in machine learning.

Amazon Rekognition Custom Labels builds on Amazon Rekognition's existing capabilities, which are already trained on tens of millions of images across many categories. Instead of thousands of images, you can upload a small set of training images (typically a few hundred images or less) that are specific to your use case. You can do this by using the easy-to-use console. If your images are already labeled, Amazon Rekognition Custom Labels can begin training a model in a short time. If not, you can label the images directly within the labeling interface, or you can use Amazon SageMaker Ground Truth to label them for you.

After Amazon Rekognition Custom Labels begins training from your image set, it can produce a custom image analysis model for you in just a few hours. Behind the scenes, Amazon Rekognition Custom Labels automatically loads and inspects the training data, selects the right machine learning algorithms, trains a model, and provides model performance metrics. You can then use your custom model through the Amazon Rekognition Custom Labels API and integrate it into your applications.

Key Benefits

Simplified data labeling

The Amazon Rekognition Custom Labels console provides a visual interface to make labeling your images fast and simple. The interface allows you to apply a label to the entire image. You can also identify and label specific objects in images using bounding boxes with a click-and-drag interface. Alternatively, if you have a large dataset, you can use Amazon SageMaker Ground Truth to efficiently label your images at scale.

Automated machine learning

No machine learning expertise is required to build your custom model. Amazon Rekognition Custom Labels includes automated machine learning (AutoML) capabilities that take care of the machine learning for you. When the training images are provided, Amazon Rekognition Custom Labels can automatically load and inspect the data, select the right machine learning algorithms, train a model, and provide model performance metrics.

Simplified model evaluation, inference, and feedback

You evaluate your custom model's performance on your test set. For every image in the test set, you can see a side-by-side comparison of the model's prediction and the assigned label. You can also review detailed performance metrics such as precision, recall, F1 scores, and confidence scores. You can start using your model immediately for image analysis, or you can iterate and retrain new versions with more images to improve performance. After you start using your model, you track your predictions, correct any mistakes, and use the feedback data to retrain new model versions and improve performance.

Are You a First-Time Amazon Rekognition Custom Labels User?

If you're a first-time user of Amazon Rekognition Custom Labels, we recommend that you read the following sections in order:

1. Setting Up Amazon Rekognition Custom Labels (p. 6) – In this section, you set your account details.

2. Getting Started with Amazon Rekognition Custom Labels (p. 12) – In this section, you create your first Amazon Rekognition Custom Labels custom model.

Choosing to Use Amazon Rekognition Custom Labels

You can use Amazon Rekognition Custom Labels to find objects, scenes, and concepts in images by using a machine learning model that you create.

Note
Amazon Rekognition Custom Labels is not designed for analyzing faces, detecting text, or finding unsafe content in images. To perform these tasks, you can use Amazon Rekognition Image. For more information, see What Is Amazon Rekognition.

Amazon Rekognition Label Detection

Customers across media and entertainment, advertising and marketing, industrial, agricultural, and other segments need to identify, classify, and search for important objects, scenes, and other concepts in their images and videos—at scale. Amazon Rekognition's label detection feature makes it fast and easy to detect thousands of common objects (such as cars and trucks, corn and tomatoes, and basketballs and soccer balls) within images and video by using computer vision and machine learning technologies.

However, you can't find specialized objects using Amazon Rekognition label detection. For example, sports leagues want to identify team logos on player jerseys and helmets during game footage, media networks would like to detect specific sponsor logos to report on advertiser coverage, manufacturers need to distinguish between specific machine parts or products in an assembly line to monitor quality, and other customers want to identify cartoon characters in a media library, or locate products of a specific brand on retail shelves. Amazon Rekognition labels don't help you detect these specialized objects.

Amazon Rekognition Custom Labels

You can use Amazon Rekognition Custom Labels to find objects and scenes that are unique to your business needs. Amazon Rekognition Custom Labels supports use cases such as logos, objects, and scenes. You can use it to perform image classification (image-level predictions) or detection (object/bounding box level predictions).

For example, while you can find plants and leaves by using Amazon Rekognition label detection, you need Amazon Rekognition Custom Labels to distinguish between healthy, damaged, and infected plants.


Similarly, Amazon Rekognition label detection can identify images with machine parts. However, to identify specific machine parts such as a turbocharger or a torque converter, you need to use Amazon Rekognition Custom Labels. The following are examples of how you can use Amazon Rekognition Custom Labels.

• Detect logos
• Find animation characters
• Find your products
• Identify machine parts
• Classify agricultural produce quality (such as rotten, ripe, or raw)

Amazon Rekognition Custom Labels and Machine Learning Terminology

The following are common terms used in this documentation.

Topics
• Assets (p. 3)
• Object Detection (p. 3)
• Confidence Score (p. 4)
• Dataset (p. 4)
• Evaluation (p. 4)
• Image Label (p. 4)
• Bounding Box Label (p. 4)
• Model Performance Metrics (p. 4)
• Training (p. 5)
• Testing (p. 5)

Assets

Assets are the images that you use for training and testing an Amazon Rekognition Custom Labels model.

Object Detection

Object detection detects a known object and its location on an image. For example, an image might contain several different types of dogs. A model trained to detect Border Collie dogs is able to detect a Border Collie and its location among the rest of the dogs in the image.

To train a model, an image needs to be annotated with a label for the desired object, and the location of the object on the image. You mark the location of the object by using a bounding box. A bounding box is a set of coordinates that isolate an object on an image.

Amazon Rekognition provides a single API, DetectCustomLabels, to get predictions from a model. The operation returns classification and object detection information in the results of a single call. For example, the following JSON shows the label name (class) and the bounding box (location) information for a house.


{
    "CustomLabels": [
        {
            "BoundingBox": {
                "Width": 0.053907789289951324,
                "Top": 0.08913730084896088,
                "Left": 0.11085548996925354,
                "Height": 0.013171200640499592
            },
            "Name": "House",
            "Confidence": 99.56285858154297
        }
    ]
}
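The BoundingBox values in this response are ratios of the overall image width and height. As an illustration (not part of the guide's official examples; the image dimensions here are assumed), the following sketch converts such a box into absolute pixel coordinates:

```python
def box_to_pixels(box, image_width, image_height):
    """Convert a Rekognition BoundingBox (ratios of the overall
    image dimensions) into absolute pixel coordinates."""
    return {
        "Left": round(box["Left"] * image_width),
        "Top": round(box["Top"] * image_height),
        "Width": round(box["Width"] * image_width),
        "Height": round(box["Height"] * image_height),
    }

# Using the bounding box from the example response on a 1920 x 1080 image:
box = {
    "Width": 0.053907789289951324,
    "Top": 0.08913730084896088,
    "Left": 0.11085548996925354,
    "Height": 0.013171200640499592,
}
print(box_to_pixels(box, 1920, 1080))
```

Because the coordinates are ratios, the same response can be drawn correctly on any rendition of the image, regardless of its resolution.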

Confidence Score

Amazon Rekognition Custom Labels helps you train a custom model that makes a prediction about the presence of each custom label (that you specify during training) in an image. For each prediction, the custom model returns a confidence score, which is a number between 0 and 100 that indicates how confident Amazon Rekognition Custom Labels is in the presence of that label (and the bounding box location of the object). A lower value represents lower confidence.

In general, as the confidence score increases, the algorithm overall makes more correct predictions, but returns fewer results. At lower confidence values, the algorithm makes more mistakes, but returns more results. You can use the confidence score to sort your predictions for an individual label, and set a threshold value that applies to your use case.
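The sort-and-threshold step described above can be sketched client-side as follows. This is an illustrative example only; the function name and the sample predictions are hypothetical, not output from the Rekognition API:

```python
def filter_predictions(custom_labels, min_confidence):
    """Keep only predictions at or above a confidence threshold,
    sorted from most to least confident."""
    kept = [p for p in custom_labels if p["Confidence"] >= min_confidence]
    return sorted(kept, key=lambda p: p["Confidence"], reverse=True)

# Hypothetical predictions, shaped like CustomLabels entries.
predictions = [
    {"Name": "Shed", "Confidence": 31.45},
    {"Name": "House", "Confidence": 99.56},
    {"Name": "Garage", "Confidence": 62.10},
]

# A higher threshold returns fewer, more reliable results.
print(filter_predictions(predictions, 60.0))
```

Raising `min_confidence` trades recall for precision, which is the same trade-off the paragraph above describes.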

Dataset

A dataset is a collection of images and associated labels. For each image in a dataset, one or more associated labels identify objects, scenes, or concepts in the image. Datasets are used to train and test an Amazon Rekognition Custom Labels model.

Evaluation

When training is complete, Amazon Rekognition Custom Labels uses a test dataset to measure, or evaluate, the quality of the trained model. The results of evaluation are expressed as metrics that you can use as guidance to further improve the performance of the model.

Image Label

An image label specifies whether a scene (such as a wedding or a beach) or an object (such as a dog or a car) is present in that image.

Bounding Box Label

A bounding box label specifies labels that are localized within an image and are indicated by a specific polygon (a bounding box with coordinates). Typically, these are physical objects that are well differentiated and easily visible—for example, dogs, cats, tables, or machine parts.

Model Performance Metrics

Metrics are values that Amazon Rekognition Custom Labels provides after testing. You use metrics to improve the performance of your model.


The Amazon Rekognition Custom Labels console provides metrics for the overall performance of a model and metrics for individual labels. The Amazon Rekognition Custom Labels console provides metrics for F1 score, precision, and recall. For more information, see Metrics for Evaluating Your Model (p. 74). The Amazon Rekognition Custom Labels API provides other metrics. For more information, see Accessing Amazon Rekognition Custom Labels Training Results (SDK) (p. 77).
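For intuition, the three console metrics relate to raw prediction counts as follows. This is a generic illustration of the standard definitions, not output from Amazon Rekognition, and the counts are made up:

```python
def precision_recall_f1(true_positives, false_positives, false_negatives):
    """Compute the three metrics the console reports, from raw counts.
    Precision: fraction of the model's predictions that were correct.
    Recall: fraction of actual labels the model found.
    F1: harmonic mean of precision and recall."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Example: 8 correct detections, 2 false alarms, 2 missed labels.
p, r, f1 = precision_recall_f1(8, 2, 2)
print(f"precision={p:.2f} recall={r:.2f} f1={f1:.2f}")
```

A model that predicts aggressively tends to raise recall at the cost of precision, and vice versa; F1 summarizes the balance between the two.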

Training

Training is the process of creating a model to predict custom labels in images. Internally, Amazon Rekognition Custom Labels uses machine learning algorithms to train the model using a dataset of training images that you provide. Model training requires training images that accurately represent the image dataset that you want to make model predictions on. Every time you train a model, a new version of the model is created.

Testing

Testing is the process of determining how well a trained model performs on individual labels and images, and as an aggregate. You test a trained model by using a test dataset of labeled images. A test dataset should have images that represent the scenarios and environments in which the predictions are required to be made.


Setting Up Amazon Rekognition Custom Labels

In this section, you sign up for an AWS account and then create an IAM user, a security group, and optionally set up access to external Amazon S3 buckets used by the Amazon Rekognition Custom Labels console and SDK.

For information about the AWS Regions that support Amazon Rekognition Custom Labels, see Amazon Rekognition Endpoints and Quotas.

Topics
• Step 1: Create an AWS Account (p. 6)
• Step 2: Create an IAM Administrator User and Group (p. 6)
• Step 3: Set Up the AWS CLI and AWS SDKs (p. 7)
• Step 4: Set Up Amazon S3 Bucket Permissions for SDK Use (p. 8)
• Step 5: Set Up Amazon Rekognition Custom Labels Console Permissions (p. 10)

Step 1: Create an AWS Account

In this section, you sign up for an AWS account. If you already have an AWS account, skip this step.

When you sign up for Amazon Web Services (AWS), your AWS account is automatically signed up for all AWS services, including IAM. You are charged only for the services that you use.

To create an AWS account

1. Open https://portal.aws.amazon.com/billing/signup.
2. Follow the online instructions.

Part of the sign-up procedure involves receiving a phone call and entering a verification code on the phone keypad.

Write down your AWS account ID because you'll need it for the next task.

Step 2: Create an IAM Administrator User andGroup

When you create an AWS account, you get a single sign-in identity that has complete access to all of theAWS services and resources in the account. This identity is called the AWS account root user. Signing into the AWS Management Console by using the email address and password that you used to create theaccount gives you complete access to all of the AWS resources in your account.

We strongly recommend that you do not use the root user for everyday tasks, even the administrative ones. Instead, adhere to the best practice in Create Individual IAM Users, which is to create an AWS Identity and Access Management (IAM) administrator user. Then securely lock away the root user credentials, and use them to perform only a few account and service management tasks.


To create an administrator user and sign in to the console

1. Create an administrator user in your AWS account. For instructions, see Creating Your First IAM User and Administrators Group in the IAM User Guide.

Note
An IAM user with administrator permissions has unrestricted access to the AWS services in your account. You can restrict permissions as necessary. The code examples in this guide assume that you have a user with the AmazonRekognitionFullAccess permissions. You also have to provide permissions to access the console. For more information, see Step 5: Set Up Amazon Rekognition Custom Labels Console Permissions (p. 10).

2. Sign in to the AWS Management Console.

To sign in to the AWS Management Console as an IAM user, you must use a special URL. For more information, see How Users Sign In to Your Account in the IAM User Guide.

Step 3: Set Up the AWS CLI and AWS SDKs

The following steps show you how to install the AWS Command Line Interface (AWS CLI) and AWS SDKs that the examples in this documentation use. There are a number of different ways to authenticate AWS SDK calls. The examples in this guide assume that you're using a default credentials profile for calling AWS CLI commands and AWS SDK API operations.

For a list of available AWS Regions, see Regions and Endpoints in the Amazon Web Services General Reference.

Follow the steps to download and configure the AWS SDKs.

To set up the AWS CLI and the AWS SDKs

1. Download and install the AWS CLI and the AWS SDKs that you want to use. This guide provides examples for the AWS CLI, Java, and Python. For information about installing AWS SDKs, see Tools for Amazon Web Services.

2. Create an access key for the user you created in Step 2: Create an IAM Administrator User and Group (p. 6).

a. Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/.

b. In the navigation pane, choose Users.
c. Choose the name of the user you created in Step 2: Create an IAM Administrator User and Group (p. 6).
d. Choose the Security credentials tab.
e. Choose Create access key. Then choose Download .csv file to save the access key ID and secret access key to a CSV file on your computer. Store the file in a secure location. You will not have access to the secret access key again after this dialog box closes. After you have downloaded the CSV file, choose Close.

3. If you have installed the AWS CLI, you can configure the credentials and Region for most AWS SDKs by entering aws configure at the command prompt. Otherwise, use the following instructions.

4. On your computer, navigate to your home directory, and create an .aws directory. On Unix-based systems, such as Linux or macOS, this is in the following location:

~/.aws

On Windows, this is in the following location:


%HOMEPATH%\.aws

5. In the .aws directory, create a new file named credentials.
6. Open the credentials CSV file that you created in step 2. Copy its contents into the credentials file using the following format:

[default]
aws_access_key_id = your_access_key_id
aws_secret_access_key = your_secret_access_key

Substitute your access key ID and secret access key for your_access_key_id and your_secret_access_key.

7. Save the credentials file and delete the CSV file.
8. In the .aws directory, create a new file named config.
9. Open the config file and enter your Region in the following format.

[default]
region = your_aws_region

Substitute your desired AWS Region (for example, "us-west-2") for your_aws_region.

Note
If you don't select a Region, then us-east-1 is used by default.

10. Save the config file.
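Both files above use the standard INI layout, so they can also be written programmatically. The following is an optional sketch using Python's standard library; the key values and the scratch directory are placeholders, not real credentials:

```python
import configparser
import tempfile
from pathlib import Path

def write_aws_files(aws_dir, access_key_id, secret_access_key, region):
    """Write minimal credentials and config files, in the INI
    format shown above, into the given directory."""
    aws_dir = Path(aws_dir)
    aws_dir.mkdir(parents=True, exist_ok=True)

    credentials = configparser.ConfigParser()
    credentials["default"] = {
        "aws_access_key_id": access_key_id,
        "aws_secret_access_key": secret_access_key,
    }
    with open(aws_dir / "credentials", "w") as f:
        credentials.write(f)

    config = configparser.ConfigParser()
    config["default"] = {"region": region}
    with open(aws_dir / "config", "w") as f:
        config.write(f)

# Example with dummy values, written to a scratch directory
# rather than the real ~/.aws:
scratch = tempfile.mkdtemp()
write_aws_files(scratch, "AKIDEXAMPLE", "wJalrEXAMPLEKEY", "us-west-2")
print((Path(scratch) / "config").read_text())
```

In practice you would point this at `~/.aws` and pass the values from your downloaded CSV file; never commit real credentials to source control.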

Step 4: Set Up Amazon S3 Bucket Permissions for SDK Use

To train models with the SDK, Amazon Rekognition Custom Labels needs access to the Amazon S3 buckets that hold your training and testing images. If your images are stored in the Amazon S3 bucket that Amazon Rekognition Custom Labels creates on your behalf when you first open the console, the permissions are already set up.

If your images are in different buckets, such as buckets that contain the images for an Amazon SageMaker Ground Truth manifest, you need to add the following policies. For information about applying a permissions policy to an Amazon S3 bucket, see How Do I Add an S3 Bucket Policy?.

To set up permissions policy

1. Add the following policy to the input bucket. Replace my-read-bucket with the name of your bucket.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AWSRekognitionS3AclBucketRead20191011",
            "Effect": "Allow",
            "Principal": {
                "Service": "rekognition.amazonaws.com"
            },
            "Action": [
                "s3:GetBucketAcl",
                "s3:GetBucketLocation"
            ],
            "Resource": "arn:aws:s3:::my-read-bucket"
        },
        {
            "Sid": "AWSRekognitionS3GetBucket20191011",
            "Effect": "Allow",
            "Principal": {
                "Service": "rekognition.amazonaws.com"
            },
            "Action": [
                "s3:GetObject",
                "s3:GetObjectAcl",
                "s3:GetObjectVersion",
                "s3:GetObjectTagging"
            ],
            "Resource": "arn:aws:s3:::my-read-bucket/*"
        }
    ]
}

2. Add the following policy to the output bucket. Replace my-write-bucket with the name of your bucket and my-write-prefix with the folder name where output is placed.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AWSRekognitionS3ACLBucketWrite20191011",
            "Effect": "Allow",
            "Principal": {
                "Service": "rekognition.amazonaws.com"
            },
            "Action": "s3:GetBucketAcl",
            "Resource": "arn:aws:s3:::my-write-bucket"
        },
        {
            "Sid": "AWSRekognitionS3PutObject20191011",
            "Effect": "Allow",
            "Principal": {
                "Service": "rekognition.amazonaws.com"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::my-write-bucket/my-write-prefix/*",
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-acl": "bucket-owner-full-control"
                }
            }
        }
    ]
}
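If you prefer to build and apply the output-bucket policy from code rather than the console, one approach is sketched below. The helper function name is illustrative; the commented boto3 call (`put_bucket_policy`) is the standard way to attach a bucket policy but is not run here, and the bucket and prefix names are placeholders:

```python
import json

def rekognition_write_policy(bucket, prefix):
    """Build the output-bucket policy shown above as a Python dict.
    The bucket and prefix arguments are placeholders."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AWSRekognitionS3ACLBucketWrite20191011",
                "Effect": "Allow",
                "Principal": {"Service": "rekognition.amazonaws.com"},
                "Action": "s3:GetBucketAcl",
                "Resource": f"arn:aws:s3:::{bucket}",
            },
            {
                "Sid": "AWSRekognitionS3PutObject20191011",
                "Effect": "Allow",
                "Principal": {"Service": "rekognition.amazonaws.com"},
                "Action": "s3:PutObject",
                "Resource": f"arn:aws:s3:::{bucket}/{prefix}/*",
                "Condition": {
                    "StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
                },
            },
        ],
    }

policy_json = json.dumps(rekognition_write_policy("my-write-bucket", "my-write-prefix"))
print(policy_json)
# To apply it, you could use boto3 (not run in this sketch):
#   boto3.client("s3").put_bucket_policy(Bucket="my-write-bucket", Policy=policy_json)
```

Generating the policy from code keeps the bucket and prefix names in one place if you manage several training and output buckets.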

Decrypting Files Encrypted with AWS Key Management Service

If you use AWS Key Management Service (KMS) to encrypt your Amazon Rekognition Custom Labels manifest files, image files, or the training output files, Amazon Rekognition Custom Labels needs to decrypt your files before they can be used to train a model. To give Amazon Rekognition Custom Labels permission to access your KMS keys, update the key policy for your customer master key (CMK) with the following policy. For more information, see Changing a key policy.

{
    "Sid": "AllowRekognition",
    "Effect": "Allow",
    "Principal": {
        "Service": "rekognition.amazonaws.com"
    },
    "Action": [
        "kms:GenerateDataKey*",
        "kms:Decrypt"
    ],
    "Resource": "*"
}

Step 5: Set Up Amazon Rekognition Custom Labels Console Permissions

Console Access

The Identity and Access Management (IAM) user or group that uses the Amazon Rekognition Custom Labels console needs the following IAM policy, which covers Amazon S3, Amazon SageMaker Ground Truth, and Amazon Rekognition Custom Labels. For information about adding IAM policies, see Creating IAM Policies.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:HeadBucket",
                "s3:ListAllMyBuckets"
            ],
            "Resource": "*"
        },
        {
            "Sid": "s3Policies",
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:CreateBucket",
                "s3:GetBucketAcl",
                "s3:GetBucketLocation",
                "s3:GetObject",
                "s3:GetObjectAcl",
                "s3:GetObjectVersion",
                "s3:GetObjectTagging",
                "s3:GetBucketVersioning",
                "s3:GetObjectVersionTagging",
                "s3:PutBucketCORS",
                "s3:PutLifecycleConfiguration",
                "s3:PutBucketPolicy",
                "s3:PutObject",
                "s3:PutObjectTagging",
                "s3:PutBucketVersioning",
                "s3:PutObjectVersionTagging"
            ],
            "Resource": [
                "arn:aws:s3:::custom-labels-console-*",
                "arn:aws:s3:::rekognition-video-console-*"
            ]
        },
        {
            "Sid": "rekognitionPolicies",
            "Effect": "Allow",
            "Action": [
                "rekognition:*"
            ],
            "Resource": "*"
        },
        {
            "Sid": "groundTruthPolicies",
            "Effect": "Allow",
            "Action": [
                "groundtruthlabeling:*"
            ],
            "Resource": "*"
        }
    ]
}

External Amazon S3 Buckets

Additionally, if you intend to use your own Amazon S3 bucket to upload the images or manifest file to the console, you must add the following policy block to the preceding policy. Replace my-bucket with the name of the bucket.

{
    "Sid": "s3ExternalBucketPolicies",
    "Effect": "Allow",
    "Action": [
        "s3:GetBucketAcl",
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:GetObjectAcl",
        "s3:GetObjectVersion",
        "s3:GetObjectTagging",
        "s3:ListBucket",
        "s3:PutObject"
    ],
    "Resource": [
        "arn:aws:s3:::my-bucket*"
    ]
}


Getting Started with Amazon Rekognition Custom Labels

The Getting Started instructions show you how to create, train, evaluate, and use a model that detects objects, scenes, and concepts in images. The general workflow is as follows:

To prepare, train, evaluate, and use the model to analyze images, you do the following.

Prepare your Images

Collect the images that contain the objects, scenes, and concepts that are specific to your business needs. For example, you can find your logo in social media posts, identify your products on store shelves, classify machine parts in an assembly line, distinguish healthy and infected plants, or detect animated characters in videos. Later in these Getting Started instructions you label the images so that they can be used to train and test a model.

Create a Project

Create a project to manage the files used to create a model. A project is where you manage your model files, train and evaluate your model, and make it ready for use. The first time you open the Amazon Rekognition Custom Labels console in a supported AWS Region, you are asked to create an Amazon S3 bucket (console bucket) to store your project files. The format of the bucket name is custom-labels-console-{region}-{random value}. The random value ensures that there isn't a collision between bucket names.

Create a Dataset

To create a dataset, import labeled or unlabeled images. A dataset is a set of images and labels that describe those images. You use datasets to train and test the models you create. A dataset is stored in the Amazon SageMaker Ground Truth format manifest. It's important to know how you want to use your dataset. For more information, see Purposing Datasets (p. 42).


You create your dataset by importing the images used to train the model. You can bring in images from Amazon S3, your local computer, an Amazon SageMaker Ground Truth manifest file, or an already labeled dataset.

For unlabeled images, you have to annotate the images with the labels you want to train the model with. This can be at the image level or, for objects within the image, bounding boxes that isolate the object within an image.

For these Getting Started instructions, you use images loaded from your computer. The images are unlabeled.

Create a Test Dataset

Before training your model, you choose the set of images that Amazon Rekognition Custom Labels uses to evaluate your model. You can create a new test dataset, use an existing test dataset, or split the training dataset that you just created. For these Getting Started instructions, you split the training dataset.

Train Your Model

Train your model with the training dataset. At the start of training, Amazon Rekognition Custom Labels chooses the most suitable algorithm to train the model with. The model is trained and then tested using the test dataset.

Evaluate Your Model

Evaluate the performance of your model by using metrics created during testing.

Evaluations enable you to understand the performance of your trained model, and decide if you're ready to use it in production. If improvements are needed, you can add more training data or improve labeling.

Use Your Model

Start your trained model and use it to detect custom labels in new images.

Improve the Performance of Your Model

You can give feedback on the predictions your model makes and use it to make improvements to your model. For more information, see Model Feedback Solution (p. 103).

Getting Started with the Amazon Rekognition Custom Labels Console

These Getting Started instructions show you how to create and train an Amazon Rekognition Custom Labels model by using the Amazon Rekognition Custom Labels console. They also show you how to start the model and use it to detect custom labels in new images.


Note
If you are training your model to detect objects, scenes, or concepts, you don't need to do Step 6: Locate Objects with Bounding Boxes (p. 19). Similarly, if you're training a model to detect the location of objects in an image, you don't need to do Step 5: Identify Objects, Scenes and Concepts with Image-Level Labels (p. 18).

Topics

• Step 1: Prepare Your Images (p. 14)

• Step 2: Create Your First Project (p. 14)

• Step 3: Create a Dataset (p. 15)

• Step 4: Create Labels (p. 17)

• Step 5: Identify Objects, Scenes and Concepts with Image-Level Labels (p. 18)

• Step 6: Locate Objects with Bounding Boxes (p. 19)

• Step 7: Train Your Model (p. 21)

• Step 8: Evaluate Your Model (p. 22)

• Step 9: Start Your Model (p. 23)

• Step 10: Analyze Images with Your Model (p. 24)

• Step 11: Stop Your Model (p. 24)

Step 1: Prepare Your Images

Amazon Rekognition Custom Labels requires images to train and test your model. To prepare your images, consider the following:

• Choose a specific domain for the model you want to create. For example, you could choose a model for scenic views and another model for objects such as machine parts. Amazon Rekognition Custom Labels works best if your images are in the chosen domain.

• Use at least 10 images to train your model.

• Images must be in PNG or JPEG format.

• Use images that show the object in a variety of lighting conditions, backgrounds, and resolutions.

• Training and testing images should be similar to the images that you want to use the model with.

• Decide what labels to assign to the images. For more information, see Assigning Image-Level Labels to an Image (p. 63).

• Ensure that images are sufficiently large in terms of resolution. For more information, see Limits in Amazon Rekognition Custom Labels (p. 110).

• Ensure that occlusions don't obscure objects that you want to detect.

• Use images that have sufficient contrast with the background.

• Use images that are bright and sharp. Avoid using images that may be blurry due to subject and camera motion as much as possible.

• Use an image where the object occupies a large proportion of the image.

Step 2: Create Your First Project

The Amazon Rekognition Custom Labels console is where you create and manage your models. The first time you use the console, Amazon Rekognition Custom Labels asks to create an Amazon S3 bucket in your account. The bucket is used to store your Amazon Rekognition Custom Labels projects, datasets, and models. You can't use the Amazon Rekognition Custom Labels console unless the bucket is created.


Within Amazon Rekognition Custom Labels, you use projects to manage the models that you create. A project manages the input images and labels, datasets, model training, model evaluation, and running the model. For more information, see Creating an Amazon Rekognition Custom Labels Project (p. 32).

To create an Amazon Rekognition Custom Labels project

1. Sign in to the AWS Management Console and open the Amazon Rekognition console at https://console.aws.amazon.com/rekognition/.

2. In the left pane, choose Use Custom Labels. The Amazon Rekognition Custom Labels landing page is shown. If you don't see Use Custom Labels, check that the AWS Region you are using supports Amazon Rekognition Custom Labels.

3. Choose Get started.

4. If this is the first time that you've opened the console in the current AWS Region, the First Time Set Up dialog box is shown. Do the following:

a. Note the name of the Amazon S3 bucket that's shown.

b. Choose Create S3 bucket to let Amazon Rekognition Custom Labels create an Amazon S3 bucket (console bucket) on your behalf.

5. In the Create Project section, enter a name for your project.

6. Choose Create project to create your project. The project page is shown. A success message should be shown at the top of the page.

Step 3: Create a Dataset

In this step, you create a training Dataset (p. 4) that holds your images and the label data needed to train your model. You add images to the dataset using images that you upload from your computer. You can also add images and label data to a dataset in other ways. For more information, see the section called “Creating a Dataset” (p. 42). Later, you create another dataset to test your model by splitting your training dataset. For information about other ways that you can create a testing dataset, see Training an Amazon Rekognition Custom Labels Model (p. 67).

For more information about the workflow for creating and managing a dataset, see Managing a Dataset (p. 40).

To ensure that Amazon Rekognition Custom Labels chooses the best algorithm to train your model, we recommend that you use domain-specific datasets. For example, create a dataset specifically for detecting scenic views, and use another dataset for a model that detects the location of a logo on an image. For more information, see Purposing Datasets (p. 42).

The images you upload won't have labels associated with them. You add them in subsequent steps.

To create a training dataset (console)

1. In the Datasets section of the project page, choose Create dataset.

2. On the Create Dataset details page, enter a name for your dataset.

3. Choose Upload images from your computer.

4. Choose Submit.

5. A Tool Guide is shown. Choose Next to read each page. After you have finished reading, the dataset gallery is shown. The gallery is placed in labeling mode so that you can add your images.


6. Choose + Add Images. The Add images dialog box is shown.

7. Drag and drop your images onto the dialog box, or choose Choose files to select the images from your computer.

You can load up to 30 images at a time.

8. Choose Add images. The images are uploaded and added to the dataset. You should see the dataset gallery page with your images.

9. If you need to add more images, do steps 6-8 until all of your images are added.

Step 4: Create Labels

After uploading your images into a dataset, you use the dataset gallery to add new labels to the dataset. A label identifies an object, scene, concept, or bounding box around an object in an image. For example, if your dataset contains images of dogs, you might add labels for breeds of dogs. Your dataset needs at least two labels if you are using it to create a model that identifies objects, scenes, and concepts. You only need one label if your dataset is used to create a model that finds the location of objects on an image. For more information, see Purposing Datasets (p. 42).

Note
Amazon Rekognition Custom Labels doesn't use Exif orientation metadata in images to rotate images for display. Depending on the orientation of the camera at the time the image was photographed, images in the dataset gallery might appear with the wrong orientation. This doesn't affect the training of your model.

In this step, you create the labels that you assign to your images as a whole (image-level) or assign to bounding boxes around objects in your images.

To create a label

1. Decide on the names of the labels that you want to use.


2. If you aren't already in labeling mode, choose Start labeling in the dataset gallery to enter labeling mode.

3. In the Filter by labels section of the dataset gallery, choose Add to open the Edit labels dialog box.

4. Enter a new label name.

5. Choose Add label.

6. Repeat steps 4 and 5 until you have created all the labels you need.

7. Choose Save to save the labels that you added.

Step 5: Identify Objects, Scenes and Concepts with Image-Level Labels

To train your model to detect objects, scenes, and concepts in your images, Amazon Rekognition Custom Labels requires the images in your dataset to be labeled with image-level labels.

Note
If you're training a model to detect the location of objects on an image, you don't need to do this step.

An image-level label applies to an image as a whole. For example, to train a model to detect scenic views, the labels for the following image might be river, countryside, or sky. An image needs at least one label. Your dataset needs at least two labels. In this step, you add image-level labels to an image.
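The label-count rules above can be sketched as a small validation helper. This is illustrative code, not part of the Amazon Rekognition Custom Labels API; the function name and the dataset representation (a dict of image names to label lists) are assumptions for the example.

```python
# Hypothetical helper illustrating the rules above: a dataset for
# classification needs at least two distinct labels (one for object
# detection), and every image needs at least one label.

def validate_image_level_labels(image_labels, for_object_detection=False):
    """image_labels maps each image name to its list of assigned labels."""
    problems = []
    distinct = {label for labels in image_labels.values() for label in labels}
    minimum = 1 if for_object_detection else 2
    if len(distinct) < minimum:
        problems.append(f"dataset has {len(distinct)} label(s); needs at least {minimum}")
    for image, labels in image_labels.items():
        if not labels:
            problems.append(f"{image} has no labels")
    return problems

# Two problems: only one distinct label, and sky.jpg is unlabeled.
problems = validate_image_level_labels({"river.jpg": ["river"], "sky.jpg": []})
```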

To assign image-level labels to images

1. In the dataset gallery, select one or more images that you want to add labels to. You can only select images on a single page at a time. To select the images between a first and second image on a page:

a. Select the first image.


b. Press and hold the shift key.

c. Select the second image. The images between the first and second image are also selected.

d. Release the shift key.

2. Choose Assign labels.

3. In the Assign a label to selected images dialog box, choose a label that you want to assign to the image or images.

4. Choose Assign to assign the label to the image.

5. Repeat labeling until every image is assigned the required labels.

6. Choose Save changes to save your changes.

Step 6: Locate Objects with Bounding Boxes

If you want your model to detect the location of objects within an image, you need to identify what the object is and where it is in the image. A bounding box is a box that isolates an object in an image. You use bounding boxes to train a model to detect different objects in the same image. You identify the object by assigning a label to the bounding box.

Note
If you're training a model to find objects, scenes, and concepts, you don't need to do this step.


For example, if you want to train a model that detects Amazon Echo Dot devices, you draw a bounding box around each Echo Dot in an image and assign a label named Echo Dot to the bounding box. The following image shows a bounding box around an Echo Dot device. The image also contains an Amazon Echo without a bounding box.

In this step, you use the console to draw bounding boxes around the objects in your images. You also identify objects within the image by assigning labels to the bounding box.

Before you can add bounding boxes, you must add at least one label to the dataset. For more information, see Step 4: Create Labels (p. 17).

To add a bounding box

1. In the dataset gallery, choose the images that you want to add bounding boxes to.

2. Choose Draw bounding box. A series of tips is shown before the bounding box editor is displayed.

3. In the Labels pane on the right, select the label that you want to assign to a bounding box.

4. In the drawing tool, place your pointer at the top-left area of the desired object.

5. Press the left mouse button and draw a box around the object. Try to draw the bounding box as close as possible to the object.

6. Release the mouse button. The bounding box is highlighted.

7. Choose Next if you have more images to label. Otherwise, choose Done to finish labeling.


8. Repeat steps 1–7 until you have created a bounding box in each image that contains objects.

9. Choose Save changes to save your changes.

10. Choose Exit to exit labeling mode.

Step 7: Train Your Model

To train the model, you use the dataset that you created in Step 3: Create a Dataset (p. 15). You also need a dataset to test the model. In this step, you create a testing dataset by splitting the training dataset. You can also choose to create a new dataset or use an existing dataset for testing. The original dataset remains unchanged. For more information, see Training an Amazon Rekognition Custom Labels Model (p. 67).

A new version of a model is created every time the model is trained. Amazon Rekognition Custom Labels creates a name for the model that is a combination of the project name and the timestamp for when the model is created.

Note
You are charged for the amount of time that it takes to train a model. For more information, see Training hours.

To train a model (console)

1. In the dataset gallery, choose Train Model. The Train model page is displayed.

2. In Training details, do the following:

a. In Choose project, choose the project that you created earlier.

b. In Choose training set, choose the training dataset that you previously created.

c. In Create test set, choose Split training dataset.


3. Choose Train to start training the model. The project resources page is shown.

4. Wait until training has finished. You can check the current status in the Status field of the Model section.

Step 8: Evaluate Your Model

During training, the model is evaluated for its performance against the test dataset. The labels in the test dataset are considered 'ground truth' because they represent what the image actually contains. During evaluation, the model makes predictions using the test dataset. The predicted labels are compared with the ground truth labels, and the results are available on the console evaluation page.

The Amazon Rekognition Custom Labels console shows summary metrics for the entire model and metrics for individual labels. The metrics available in the console are precision, recall, F1 score, confidence, and confidence threshold. For more information, see Evaluating a Trained Amazon Rekognition Custom Labels Model (p. 73).
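The F1 score combines precision and recall into a single number: it is their harmonic mean. A minimal sketch (not Rekognition code) of how the three metrics relate:

```python
# F1 is the harmonic mean of precision and recall, so it rewards models
# that balance the two rather than maximizing only one.

def f1_score(precision, recall):
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# A label with perfect precision but low recall still scores poorly:
print(f1_score(1.0, 0.25))  # 0.4
```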

You can use the console to focus on individual metrics. For example, to investigate precision issues for a label, you can filter the training results by label and by false positive results. For more information, see Metrics for Evaluating Your Model (p. 74).

After training, the training dataset is read-only. If you decide to improve the model, you can copy the training dataset to a new dataset. You use the copy of the dataset to train a new version of the model.

In this step, you use the console to access the training results.

To access training results (console)

1. On the Projects resources page, choose the project that contains the trained model that you want to evaluate.

2. In the Model section, choose the model that you want to see training results for. The summary results are shown.


3. Choose View test results to see the test results for individual images. A series of tips is displayed before the test results are shown. For more information about image test results, see Metrics for Evaluating Your Model (p. 74).

4. Use the metrics to evaluate the performance of the model. For more information, see Improving an Amazon Rekognition Custom Labels Model (p. 85).

Step 9: Start Your Model

If you're happy with the performance of your model, you make it available for use by starting it. The console provides example code that you use to start the model. To run the example code, you need to set up the AWS SDK. For more information, see the section called “Step 3: Set Up the AWS CLI and AWS SDKs” (p. 7).

Note
You are charged for the amount of time, in minutes, that the model is running. For more information, see Inference hours.

Starting a model takes a while to complete. You can use the console to determine if the model is running. Alternatively, you can use the DescribeProjectVersions API to get the current status. For more information, see Running a Trained Amazon Rekognition Custom Labels Model (p. 86).

To start a model (console)

1. If you haven't already:

a. Create or update an IAM user with AmazonRekognitionFullAccess and AmazonS3ReadOnlyAccess permissions. For more information, see Step 2: Create an IAM Administrator User and Group (p. 6).

b. Install and configure the latest version of the AWS CLI. For more information, see Step 2: Set Up the AWS CLI and AWS SDKs (p. 7).

2. On the Projects resources page, choose the project that contains the trained model that you want to start.

3. In the Models section, choose the model that you want to start. The summary results are shown.

4. In the Use model section, choose API Code.

5. At the command prompt, use the AWS CLI command that calls start-project-version to start your model. The value of --project-version-arn should be the Amazon Resource Name (ARN) of your model.


aws rekognition start-project-version \
    --project-version-arn "model_arn" \
    --min-inference-units 1 \
    --region us-east-1

6. Choose your project name at the top of the page to go back to the project overview page.

7. In the Model section, check the status of the model. When the status is The model is running, you can use the model to analyze images.

Step 10: Analyze Images with Your Model

After you've started your model, you can use it to analyze images. In this step, you use example code from the Amazon Rekognition Custom Labels console. For an example that draws bounding boxes around detected custom labels, see Analyzing an Image with a Trained Model (p. 96).

To run the example code, you need to set up the AWS SDK. For more information, see the section called “Step 3: Set Up the AWS CLI and AWS SDKs” (p. 7).

To analyze an image (console)

1. Upload an image that contains objects that your model can detect to an S3 bucket.

For instructions, see Uploading Objects into Amazon S3 in the Amazon Simple Storage Service Console User Guide.

2. On the Projects resources page, choose the project that contains the trained model that you want to use. The project Evaluation Summary page for the model is shown.

3. In the Use Model section, choose API code.

4. At the command prompt, use the code snippet that calls detect-custom-labels to analyze an image. It should look like the following example. The value of --project-version-arn should be the Amazon Resource Name (ARN) of your model. Change MY_BUCKET and PATH_TO_MY_IMAGE to the Amazon S3 bucket and image that you used in step 1. For more information, see Step 7: Analyze Images with Your Model (p. 30).

aws rekognition detect-custom-labels \
    --project-version-arn "model_arn" \
    --image '{"S3Object": {"Bucket": "MY_BUCKET", "Name": "PATH_TO_MY_IMAGE"}}' \
    --region us-east-1

Step 11: Stop Your Model

You are charged for the amount of time your model is running. If you have finished using the model, you should stop it. The console provides example code that you use to stop the model. For more information, see Running a Trained Amazon Rekognition Custom Labels Model (p. 86).

To run the example code, you need to set up the AWS SDK. For more information, see the section called “Step 3: Set Up the AWS CLI and AWS SDKs” (p. 7).

Stopping a model takes a while to complete. Check the console to determine if the model is running. Alternatively, you can use the DescribeProjectVersions API to get the current status.

To stop a model (console)

1. On the Projects resources page, choose the project that contains the trained model that you want to stop.


2. In the Models section, choose the model that you want to stop. The summary results are shown.

3. In the Use model section, choose API Code.

4. At the command prompt, use the AWS CLI command that calls stop-project-version. It should look similar to the following example. The value of --project-version-arn should be the Amazon Resource Name (ARN) of your model. For more information, see Step 8: Stop Your Model (p. 31).

aws rekognition stop-project-version \
    --project-version-arn "model_arn" \
    --region us-east-1

5. Choose your project name at the top of the page to go back to the project overview page.

6. In the Model section, check the status of the model. The model has successfully stopped when the status is The model is ready to run.

Getting Started with the Amazon Rekognition Custom Labels SDK

Amazon Rekognition Custom Labels provides an API that you can use to create Amazon Rekognition Custom Labels projects, train and evaluate models, run models, and analyze images using your model.

For information about the general flow of creating and using a model, see Getting Started with Amazon Rekognition Custom Labels (p. 12).

Note
The Amazon Rekognition Custom Labels API is documented as part of the Amazon Rekognition API reference content. For more information, see API Reference (p. 112).

This section provides AWS CLI command examples. For information about formatting CLI commands, see Formatting the AWS CLI Examples.

Creating a Project

To create a project with the API, use CreateProject. For more information, see Creating an Amazon Rekognition Custom Labels Project (SDK) (p. 33). After you create a project, you need to create or add a dataset by using the Amazon Rekognition Custom Labels console. For more information, see Creating an Amazon Rekognition Custom Labels Dataset (p. 42).

Creating a Dataset

A dataset contains the images, labels, and bounding box information that you use to train a model. Alternatively, you can use an Amazon SageMaker Ground Truth manifest file to train a model. The Amazon Rekognition Custom Labels API doesn't create datasets, import images, or label images. To do these tasks, you need to use the Amazon Rekognition Custom Labels console or provide your own Amazon SageMaker Ground Truth format manifest file. An option for creating a manifest file is to create an Amazon SageMaker Ground Truth labeling job. For more information, see Amazon SageMaker Ground Truth.

Dataset files are stored in an Amazon S3 bucket. If you aren't using a dataset that was created in the Amazon Rekognition Custom Labels console, you need to add permissions to your bucket or buckets so that Amazon Rekognition Custom Labels can access the bucket contents. For more information, see Step 4: Set Up Amazon S3 Bucket Permissions for SDK Use (p. 8).


For more information about the workflow for creating and managing a dataset, see Managing a Dataset (p. 40).

Training and Evaluating a Model

You can use the Amazon Rekognition Custom Labels APIs to train and evaluate a model.

Training a model

You train an Amazon Rekognition Custom Labels model by calling CreateProjectVersion. A new version of a model is created each time it is trained. CreateProjectVersion requires a training dataset and a testing dataset.

If you choose, you can create a testing dataset by splitting your training dataset in an 80 (training)/20 (testing) split. When a model is trained, its performance is evaluated and the results are placed in an S3 bucket. You can place the training results in the same bucket as the console S3 bucket, or in a different S3 bucket that you choose. For more information, see Training a Model (SDK) (p. 68).
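The 80/20 split can be illustrated with a short sketch. This is not the service's actual splitting algorithm; the function name, the deterministic seed, and the file names are assumptions for the example.

```python
import random

# Illustrative sketch of an 80 (training) / 20 (testing) split, like the
# one Amazon Rekognition Custom Labels performs when you ask it to split
# your training dataset.

def split_dataset(images, test_fraction=0.2, seed=42):
    shuffled = list(images)
    random.Random(seed).shuffle(shuffled)   # deterministic for the example
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]   # (training, testing)

training, testing = split_dataset([f"img_{i}.jpg" for i in range(100)])
```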

You are charged for the amount of time that it takes to train a model. For more information, see Training hours.

Evaluating a model

To evaluate a model, you call DescribeProjectVersions to get the training results. The training results include metrics that aren't available in the console, such as a confusion matrix for classification results. The training results are returned in the following formats:

• F1 score – A single value representing the overall performance of precision and recall for the model. For more information, see Overall Model Performance (p. 75).

• Summary file location – The training summary includes aggregated evaluation metrics for the entire testing dataset and metrics for each individual label. DescribeProjectVersions returns the S3 bucket and folder location of the summary file. For more information, see Summary File (p. 77).

• Evaluation manifest snapshot location – The snapshot contains detailed information about the test results, including the confidence ratings and the results of binary classification tests, such as false positives. DescribeProjectVersions returns the S3 bucket and folder location of the snapshot files. For more information, see Evaluation Manifest Snapshot (p. 78).

For more information about evaluating a model, see Evaluating a Trained Amazon Rekognition Custom Labels Model (p. 73).

Running and Using a Model

Amazon Rekognition Custom Labels provides an API that you use, with your model, to detect custom labels in images. The model must be running before you can detect custom labels.

Running a model

When your model is ready to use, you start it by calling StartProjectVersion. For more information, see Starting an Amazon Rekognition Custom Labels Model (SDK) (p. 87). You are charged for the amount of time, in minutes, that the model is running. For more information, see Inference hours. To stop a running model, call StopProjectVersion. For more information, see Stopping an Amazon Rekognition Custom Labels Model (SDK) (p. 89).

Analyzing images

To analyze new images with your running model, you call DetectCustomLabels. This operation takes an input image from either an Amazon S3 bucket or from a local file system. It also requires the Amazon Resource Name (ARN) of the model that you want to use. The response includes predictions for the presence of custom labels in the input image. A custom label has an associated confidence score that tells you the model's confidence in the accuracy of the prediction. Object location information for objects detected in the image is also returned. For more information, see Analyzing an Image with a Trained Model (p. 96).

Step 1: Prepare Your Images

Amazon Rekognition Custom Labels requires images to train and test your model. To prepare your images, consider the following:

• Choose a specific domain for the model you want to create. For example, you could choose a model for scenic views and another model for objects such as machine parts. Amazon Rekognition Custom Labels works best if your images are in the chosen domain.

• Use at least 10 images to train your model.

• Images must be in PNG or JPEG format.

• Use images that show the object in a variety of lighting conditions, backgrounds, and resolutions.

• Training and testing images should be similar to the images that you want to use the model with.

• Decide what labels to assign to the images. For more information, see Assigning Image-Level Labels to an Image (p. 63).

• Ensure that images are sufficiently large in terms of resolution. For more information, see Limits in Amazon Rekognition Custom Labels (p. 110).

• Ensure that occlusions don't obscure objects that you want to detect.

• Use images that have sufficient contrast with the background.

• Use images that are bright and sharp. Avoid using images that may be blurry due to subject and camera motion as much as possible.

• Use an image where the object occupies a large proportion of the image.
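Two of the checks above are mechanical and can be automated before you upload: the 10-image minimum and the PNG/JPEG format requirement come from this guide, while the helper name and its structure are hypothetical.

```python
from pathlib import Path

# Hypothetical pre-flight check for a local image set, covering only the
# two mechanical rules above: at least 10 images, all PNG or JPEG.

ALLOWED_SUFFIXES = {".png", ".jpg", ".jpeg"}

def check_training_images(paths):
    issues = []
    if len(paths) < 10:
        issues.append(f"only {len(paths)} images; at least 10 are recommended")
    for p in paths:
        if Path(p).suffix.lower() not in ALLOWED_SUFFIXES:
            issues.append(f"{p}: not PNG or JPEG")
    return issues

# Two issues: too few images, and dot2.gif is not a supported format.
issues = check_training_images(["dot1.png", "dot2.gif"])
```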

Step 2: Create a Project

You use a project to manage your models. A project contains the datasets and models that you create. In this step, you create a project using the CreateProject API.

To create a project (CLI)

1. If you haven't already:

a. Create or update an IAM user with AmazonRekognitionFullAccess and AmazonS3FullAccess permissions. For more information, see Step 2: Create an IAM Administrator User and Group (p. 6).

b. Install and configure the AWS CLI and the AWS SDKs. For more information, see Step 2: Set Up the AWS CLI and AWS SDKs (p. 7).

c. Set permissions to access the Amazon Rekognition Custom Labels console. For more information, see Step 5: Set Up Amazon Rekognition Custom Labels Console Permissions (p. 10).

2. At the command prompt, enter the following command. Replace my_project with a project name of your choice.

aws rekognition create-project --project-name my_project

3. Note the project Amazon Resource Name (ARN) that's displayed in the response. You'll need it to create a model.


Step 3: Prepare Your Dataset

You need a dataset to train a model. A dataset contains the labeled images that you prepared in Step 1: Prepare Your Images (p. 27). You can create a dataset using the Amazon Rekognition Custom Labels console. Alternatively, you can use an Amazon SageMaker Ground Truth manifest format file and images. For more information, see Managing a Dataset (p. 40).

For Getting Started, create a dataset by using the Amazon Rekognition Custom Labels console. For more information, see Step 3: Create a Dataset (p. 15).

Step 4: Train Your Model

You train an Amazon Rekognition Custom Labels model by calling CreateProjectVersion. A new version of a model is created each time it is trained. CreateProjectVersion requires a training dataset and a testing dataset. In this step, you create a testing dataset by splitting your training dataset in an 80 (training)/20 (testing) split. When a model is trained, its performance is evaluated and the results are placed in an S3 bucket. You can place the training results in the same bucket as the console S3 bucket, or in an S3 bucket that you choose. For more information, see Training a Model (SDK) (p. 68).

In this step, you train a model using the dataset you prepared in Step 3: Prepare Your Dataset (p. 28).

To train a model (SDK)

1. At the command prompt, enter the following command. Replace the following:

• my_project_arn with the Amazon Resource Name (ARN) of the project you created in Step 2: Create a Project (p. ).

• version_name with a unique version name of your choosing.

• output_bucket with the name of the Amazon S3 bucket where Amazon Rekognition Custom Labels saves the training results.

• output_folder with the name of the folder where the training results are saved.

• training_bucket with the name of the Amazon S3 bucket that contains the manifest file and images.

• training_manifest with the name and path to the manifest file.

aws rekognition create-project-version \
    --project-arn "my_project_arn" \
    --version-name "version_name" \
    --output-config '{"S3Bucket":"output_bucket", "S3KeyPrefix":"output_folder"}' \
    --training-data '{"Assets": [{ "GroundTruthManifest": { "S3Object": { "Bucket": "training_bucket", "Name": "training_manifest" } } }]}' \
    --testing-data '{"AutoCreate": true}'

2. Note the value of version_name because you need it in the next step.

3. Use the following command to get the current status of the model training. Training is complete when the value of Status is TRAINING_COMPLETED.

Replace project_arn with the ARN of the project you created in the section called “Step 2: Create a Project” (p. 27). Replace version_name with the name of the version that you used in Step 4: Train Your Model (p. 28).

aws rekognition describe-project-versions --project-arn "project_arn" \
    --version-names "version_name"
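If you script this check, you can parse the JSON that describe-project-versions prints. The ProjectVersionDescriptions list and its Status field are the documented response shape; the helper function itself is an illustrative sketch, not AWS code.

```python
import json

# Sketch of checking training status from describe-project-versions JSON
# output (the AWS CLI's default output format).

def training_complete(response_json):
    response = json.loads(response_json)
    descriptions = response.get("ProjectVersionDescriptions", [])
    return bool(descriptions) and descriptions[0]["Status"] == "TRAINING_COMPLETED"

sample = '{"ProjectVersionDescriptions": [{"Status": "TRAINING_IN_PROGRESS"}]}'
print(training_complete(sample))  # False
```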


Step 5: Evaluate Your Model

To evaluate a model, you call DescribeProjectVersions to get the training results. The training results include metrics that aren't available in the console, such as a confusion matrix for classification results. The training results are returned in the following formats:

• F1 score – A single value representing the overall performance of precision and recall for the model. For more information, see Overall Model Performance (p. 75).

• Summary file location – The training summary includes aggregated evaluation metrics for the entire testing dataset and metrics for each individual label. DescribeProjectVersions returns the Amazon S3 bucket and folder location of the summary file. For more information, see Summary File (p. 77).

• Evaluation manifest snapshot location – The snapshot contains detailed information about the test results, including the confidence ratings and the results of classification tests, such as false positives. DescribeProjectVersions returns the Amazon S3 bucket and folder location of the snapshot files. For more information, see Evaluation Manifest Snapshot (p. 78).

In this step, you use the DescribeProjectVersions API to get the location of the summary file and the evaluation manifest snapshot. Use the information to evaluate the performance of your model. For more information, see Evaluating a Trained Amazon Rekognition Custom Labels Model (p. 73). If you need to improve the performance of your model, see Improving an Amazon Rekognition Custom Labels Model (p. 85).

To access training results (SDK)

• At the command prompt, enter the following command. Replace project_arn with the Amazon Resource Name (ARN) of the project you created in Step 2: Create a Project (p. 27). Replace version_name with the version name you used in Step 4: Train Your Model (p. 28).

aws rekognition describe-project-versions --project-arn "project_arn" \
    --version-names "version_name"

The training results F1 score and summary information are in EvaluationResult. For example:

"EvaluationResult": {
    "F1Score": 0.39382708072662354,
    "Summary": {
        "S3Object": {
            "Bucket": "my bucket",
            "Name": "output/EvaluationResultSummary.json"
        }
    }
}
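A short sketch of reading the F1 score and summary-file location out of an EvaluationResult like the one above, using only standard-library JSON parsing (no AWS calls; the s3:// URI formatting is an illustrative convenience):

```python
import json

# Parse an EvaluationResult fragment and pull out the two values you
# typically want: the F1 score and the location of the summary file.

evaluation = json.loads('''
{
    "F1Score": 0.39382708072662354,
    "Summary": {
        "S3Object": {
            "Bucket": "my bucket",
            "Name": "output/EvaluationResultSummary.json"
        }
    }
}
''')

f1 = evaluation["F1Score"]
summary_uri = "s3://{Bucket}/{Name}".format(**evaluation["Summary"]["S3Object"])
print(round(f1, 2), summary_uri)
```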

The evaluation manifest snapshot is stored in the location specified in the --output-config input parameter that you specified in Step 4: Train Your Model (p. 28).

Step 6: Start Your Model

When your model is ready to use, you start it by calling StartProjectVersion. For more information, see Starting an Amazon Rekognition Custom Labels Model (SDK) (p. 87). To stop a running model, call StopProjectVersion.

In this step, you start the model.


To start a model

• At the command prompt, enter the following command. Replace model_arn with the ARN of the model that you trained in Step 4: Train Your Model (p. 28).

aws rekognition start-project-version --project-version-arn model_arn \
    --min-inference-units "1"
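The same step can be done with the AWS SDK for Python (Boto3). The following sketch is an assumption-laden illustration, not the guide's sample: it assumes Boto3's project_version_running waiter is available in your SDK version, and all ARNs and names are placeholders.

```python
def start_model(client, project_arn, model_arn, version_name,
                min_inference_units=1):
    """Start a model version and block until it reaches RUNNING."""
    client.start_project_version(
        ProjectVersionArn=model_arn,
        MinInferenceUnits=min_inference_units,
    )
    # Boto3 ships a waiter that polls DescribeProjectVersions until
    # the version is running (it raises an error on failure).
    waiter = client.get_waiter('project_version_running')
    waiter.wait(ProjectArn=project_arn, VersionNames=[version_name])

# Usage (requires AWS credentials):
# import boto3
# start_model(boto3.client('rekognition'),
#             'project_arn', 'model_arn', 'version_name')
```

Starting a model takes a few minutes, which is why waiting for RUNNING before calling DetectCustomLabels is useful.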

Step 7: Analyze Images with Your Model

To analyze images with your model, you call DetectCustomLabels. This operation takes an input image from either an Amazon S3 bucket or a local file system. The response includes the custom labels for objects, scenes, and concepts found in the input image. A custom label has an associated confidence score that tells you how confident Amazon Rekognition Custom Labels is in the accuracy of the predicted custom label. Bounding boxes for the locations of objects detected in the image are also returned. For more information, see Analyzing an Image with a Trained Model (p. 96).

In this step, you analyze an image for custom labels using an image stored in an Amazon S3 bucket.

To detect custom labels in an image

1. Upload an image that contains objects your model can detect to an S3 bucket.

For instructions, see Uploading Objects into Amazon S3 in the Amazon Simple Storage Service Console User Guide.

2. At the command prompt, enter the following command. Replace the following values:

• bucket with the name of the Amazon S3 bucket that you used in step 1.

• image with the name of the input image file you uploaded in step 1.

• model_arn with the ARN of the model you created in Step 4: Train Your Model (p. 28).

aws rekognition detect-custom-labels --project-version-arn "model_arn" \
    --image '{"S3Object":{"Bucket":"bucket","Name":"image"}}' \
    --min-confidence 70

The output should be similar to the following. If you don't get any results, try lowering the value of --min-confidence.

{
    "CustomLabels": [
        {
            "Name": "MyLogo",
            "Confidence": 77.7729721069336,
            "Geometry": {
                "BoundingBox": {
                    "Width": 0.198987677693367,
                    "Height": 0.31296101212501526,
                    "Left": 0.07924537360668182,
                    "Top": 0.4037395715713501
                }
            }
        }
    ]
}
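In the AWS SDK for Python (Boto3), the equivalent call can be wrapped in a small helper. This is an illustrative sketch, not one of the guide's samples; the helper name and the placeholder bucket, image, and ARN values are assumptions.

```python
def detect_labels(client, model_arn, bucket, image, min_confidence=70):
    """Analyze an image stored in S3 with a running model and return
    the detected custom labels."""
    response = client.detect_custom_labels(
        ProjectVersionArn=model_arn,
        Image={'S3Object': {'Bucket': bucket, 'Name': image}},
        MinConfidence=min_confidence,
    )
    return response['CustomLabels']

# Usage (requires AWS credentials and a running model; the values
# below are placeholders):
# import boto3
# for label in detect_labels(boto3.client('rekognition'),
#                            'model_arn', 'bucket', 'image'):
#     print(label['Name'], label['Confidence'])
```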


Step 8: Stop Your Model

If you don't need to use your model, you can stop it by calling StopProjectVersion.

Note
You are charged for the amount of time your model is running.

To stop a model

• At the command prompt, enter the following command. Replace model_arn with the ARN of the model that you started in Step 6: Start Your Model (p. 29).

aws rekognition stop-project-version --project-version-arn model_arn
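The stop call can also be made with the AWS SDK for Python (Boto3). The following sketch is illustrative (the helper name and placeholder ARN are assumptions, not part of the guide's samples).

```python
def stop_model(client, model_arn):
    """Stop a running model version to avoid further charges and
    return the status reported by the service."""
    response = client.stop_project_version(ProjectVersionArn=model_arn)
    return response['Status']

# Usage (requires AWS credentials):
# import boto3
# print(stop_model(boto3.client('rekognition'), 'model_arn'))
```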


Managing an Amazon Rekognition Custom Labels Project

An Amazon Rekognition Custom Labels project is a grouping of the resources that are needed to create and manage an Amazon Rekognition Custom Labels model. You use a project to manage your datasets and models:

Dataset – The labeled images used to train a model. There can be multiple datasets in a project. To train a model, you need a training dataset and a test dataset. For more information, see Creating an Amazon Rekognition Custom Labels Dataset (p. 42).

Model – A machine learning model created by Amazon Rekognition Custom Labels that analyzes images for objects, scenes, and concepts. For more information, see Training an Amazon Rekognition Custom Labels Model (p. 67).

If you no longer need a project, you can delete it with the Amazon Rekognition Custom Labels console or with the API.

Topics
• Creating an Amazon Rekognition Custom Labels Project (p. 32)
• Deleting an Amazon Rekognition Custom Labels Project (p. 35)

Creating an Amazon Rekognition Custom Labels Project

You can create a project with the Amazon Rekognition Custom Labels console or with the API. When you first use the console, you specify the Amazon S3 bucket where Amazon Rekognition Custom Labels stores your project files. If you're using the API, you can also use Amazon S3 buckets that are external to the Amazon Rekognition Custom Labels console.

Topics
• Creating an Amazon Rekognition Custom Labels Project (Console) (p. 32)
• Creating an Amazon Rekognition Custom Labels Project (SDK) (p. 33)

Creating an Amazon Rekognition Custom Labels Project (Console)

You can use the Amazon Rekognition Custom Labels console to create a project.

To create a project (console)

1. Sign in to the AWS Management Console and open the Amazon Rekognition console at https://console.aws.amazon.com/rekognition/.

2. In the left pane, choose Use Custom Labels. The Amazon Rekognition Custom Labels landing page is shown.


3. Choose Get started.

4. Choose Create Project.

5. If this is the first time that you've opened the console in the current AWS Region, the First Time Set Up dialog box is shown. Do the following:

a. Note the name of the Amazon S3 bucket that's shown.

b. Choose Create S3 bucket to let Amazon Rekognition Custom Labels create an Amazon S3 bucket (console bucket) on your behalf.

6. In the Create project section, enter a name for your project.

7. Choose Create project to create your project. The project page is shown. A success message should be shown at the top of the page.

Creating an Amazon Rekognition Custom Labels Project (SDK)

You create an Amazon Rekognition Custom Labels project by calling CreateProject. The response is an Amazon Resource Name (ARN) that identifies the project. After you create a project, you create datasets for training and testing a model. For more information, see Creating an Amazon Rekognition Custom Labels Dataset (p. 42).

To create a project (SDK)

1. If you haven't already:

a. Create or update an IAM user with AmazonRekognitionFullAccess permissions. For more information, see Step 2: Create an IAM Administrator User and Group (p. 6).


b. Install and configure the AWS CLI and the AWS SDKs. For more information, see Step 2: Set Up the AWS CLI and AWS SDKs (p. 7).

2. Use the following code to create a project.

AWS CLI

The following example creates a project and displays its ARN.

Change the value of project-name to the name of the project that you want to create.

aws rekognition create-project --project-name my_project

Python

The following example creates a project and displays its ARN.

Change the value of project_name to the name of the project you want to create.

#Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)

import boto3

def create_project(project_name):
    client = boto3.client('rekognition')

    #Create a project
    print('Creating project:' + project_name)
    response = client.create_project(ProjectName=project_name)
    print('project ARN: ' + response['ProjectArn'])
    print('Done...')

def main():
    project_name = 'project'
    create_project(project_name)

if __name__ == "__main__":
    main()

Java

The following example creates a project and displays its ARN.

Change the value of projectName to the name of the project you want to create.

//Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
//SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)

package com.amazonaws.samples;

import com.amazonaws.services.rekognition.AmazonRekognition;
import com.amazonaws.services.rekognition.AmazonRekognitionClientBuilder;
import com.amazonaws.services.rekognition.model.CreateProjectRequest;
import com.amazonaws.services.rekognition.model.CreateProjectResult;

public class CreateProject {

    public static void main(String[] args) throws Exception {

        String projectName = "project_name";

        try {
            AmazonRekognition rekognitionClient = AmazonRekognitionClientBuilder.defaultClient();

            CreateProjectRequest createProjectRequest = new CreateProjectRequest()
                    .withProjectName(projectName);

            CreateProjectResult response = rekognitionClient.createProject(createProjectRequest);
            System.out.println("Project ARN: " + response.getProjectArn());

        } catch (Exception e) {
            System.out.println(e.toString());
        }
        System.out.println("Done...");
    }
}

3. Note the project ARN that's displayed in the response. You'll need it to create a model.

Deleting an Amazon Rekognition Custom Labels Project

You can delete a project by using the Amazon Rekognition console or by calling the DeleteProject API. To delete a project, you must first delete each associated model. A deleted project or model can't be recovered.

Topics
• Deleting an Amazon Rekognition Custom Labels Project (Console) (p. 35)
• Deleting an Amazon Rekognition Custom Labels Project (SDK) (p. 37)

Deleting an Amazon Rekognition Custom Labels Project (Console)

You can delete a project from the projects page, or you can delete a project from a project's detail page. The following procedure shows you how to delete a project using the projects page.

The Amazon Rekognition Custom Labels console deletes associated models for you during project deletion, except when a model is running or training. If a model is running, use the StopProjectVersion API to stop the model. For more information, see Stopping an Amazon Rekognition Custom Labels Model (SDK) (p. 89). If a model is training, wait until it finishes before you delete the project.

To delete a project (console)

1. Open the Amazon Rekognition console at https://console.aws.amazon.com/rekognition/.
2. Choose Use Custom Labels.
3. Choose Get started.
4. In the left navigation pane, choose Projects.


5. On the Projects page, select the radio button for the project that you want to delete.

6. Choose Delete at the top of the page. The Delete project dialog box is shown.

7. If the project has no associated models:

a. Enter delete to confirm the deletion.

b. Choose Delete to delete the project.

8. If the project has associated models:

a. Enter delete to confirm that you want to delete the model(s).

b. Choose Delete associated models. Model deletion might take a while to complete.

Note
The console can't delete models that are training or running. Try again after stopping any running models that are listed, and wait until models listed as training finish.
If you choose Close in the dialog box during model deletion, the models are still deleted. Later, you can delete the project by repeating this procedure.

c. Enter delete to confirm that you want to delete the project.

d. Choose Delete to delete the project.


Deleting an Amazon Rekognition Custom Labels Project (SDK)

You delete an Amazon Rekognition Custom Labels project by calling DeleteProject and supplying the Amazon Resource Name (ARN) of the project that you want to delete. To get the ARNs of the projects in your account, call DescribeProjects. The response includes an array of ProjectDescription objects. The project ARN is the ProjectArn field. You can use the project name to identify the ARN of the project. For example, arn:aws:rekognition:us-east-1:123456789010:project/projectname/1234567890123.
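The name-to-ARN lookup described above can be sketched in the AWS SDK for Python (Boto3). The helper name is hypothetical and, for brevity, the sketch doesn't follow NextToken pagination, which a full implementation should.

```python
def find_project_arn(client, project_name):
    """Return the ARN of the project with the given name, or None.

    Project ARNs have the form
    arn:aws:rekognition:region:account:project/project_name/timestamp,
    so the project name is the second '/'-separated field."""
    response = client.describe_projects()
    # Note: a complete implementation would also follow NextToken pages.
    for project in response['ProjectDescriptions']:
        if project['ProjectArn'].split('/')[1] == project_name:
            return project['ProjectArn']
    return None

# Usage (requires AWS credentials):
# import boto3
# print(find_project_arn(boto3.client('rekognition'), 'projectname'))
```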

Before you can delete a project, you must first delete all models in the project. For more information, see Deleting an Amazon Rekognition Custom Labels Model (SDK) (p. 93). The project might take a few moments to delete. During that time, the status of the project is DELETING. The project is deleted if a subsequent call to DescribeProjects doesn't include the project that you deleted.

To delete a project (SDK)

1. If you haven't already:

a. Create or update an IAM user with AmazonRekognitionFullAccess permissions. For moreinformation, see Step 2: Create an IAM Administrator User and Group (p. 6).

b. Install and configure the AWS CLI and the AWS SDKs. For more information, see Step 2: Set Up the AWS CLI and AWS SDKs (p. 7).

2. Use the following code to delete a project.

AWS CLI

Change the value of project-arn to the ARN of the project that you want to delete.

aws rekognition delete-project --project-arn project_arn

Python

Change the value of project_arn to the ARN of the project that you want to delete.


#Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)

import boto3

def delete_project(project_arn):
    client = boto3.client('rekognition')

    #Delete a project
    print('Deleting project:' + project_arn)
    response = client.delete_project(ProjectArn=project_arn)
    print('Status: ' + response['Status'])
    print('Done...')

def main():
    project_arn = 'project_arn'
    delete_project(project_arn)

if __name__ == "__main__":
    main()

Java

Change the value of projectArn to the ARN of the project you want to delete.

//Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved.
//SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)

package com.amazonaws.samples;

import com.amazonaws.services.rekognition.AmazonRekognition;
import com.amazonaws.services.rekognition.AmazonRekognitionClientBuilder;
import com.amazonaws.services.rekognition.model.DeleteProjectRequest;
import com.amazonaws.services.rekognition.model.DeleteProjectResult;

public class DeleteProject {

    public static void main(String[] args) throws Exception {

        String projectArn = "project_arn";

        AmazonRekognition rekognitionClient = AmazonRekognitionClientBuilder.defaultClient();

        try {
            DeleteProjectRequest request = new DeleteProjectRequest()
                    .withProjectArn(projectArn);
            DeleteProjectResult result = rekognitionClient.deleteProject(request);
            System.out.println(result.getStatus());

        } catch (Exception e) {
            System.out.println(e.toString());
        }
        System.out.println("Done...");
    }
}


Managing a Dataset

In an Amazon Rekognition Custom Labels project, datasets contain the images, assigned labels, and bounding boxes that you use to train and test a custom model. You create and manage datasets by using the Amazon Rekognition Custom Labels console. You can't create a dataset with the Amazon Rekognition Custom Labels API.

The following sections describe the workflow for creating and managing a dataset.

Prepare Your Images

You need images to train and test the model. The images represent the objects, scenes, and concepts that you want the model to detect. For recommendations about images, see Preparing Images (p. 41).

Create a Dataset

You create a dataset by specifying where to import the images and labels from. You can import from the following locations:

• An S3 bucket
• Your computer
• An Amazon SageMaker Ground Truth format manifest file
• An existing dataset

For more information, see Creating an Amazon Rekognition Custom Labels Dataset (p. 42).

To train a model, your dataset needs images with labels assigned to them. A label identifies the content of an image. For example, if an image shows a bridge, you assign a 'bridge' label to the image. For more information, see Purposing Datasets (p. 42).

Review Your Images

The dataset gallery shows the images in a dataset. If the images don't have labels, or if you want to create new label categories, you can create them in the dataset gallery. For more information, see Editing Labels (p. 63).

If you imported your images with accompanying labels, the labels are shown in the left pane of the dataset gallery. You assign labels to images. For more information, see Assigning Image-Level Labels to an Image (p. 63). You also assign labels to areas of an image that represent an object. To assign labels to an object, you use the Amazon Rekognition Custom Labels drawing tool to draw a bounding box around the object. For more information, see Drawing Bounding Boxes (p. 65).

Amazon Rekognition Custom Labels doesn't use Exif orientation metadata in images to rotate images for display. Depending on the orientation of the camera at the time the image was photographed, images in the dataset gallery might appear with the wrong orientation. This doesn't affect the training of your model.


After you've labeled your images, you can train your model. For more information, see Training an Amazon Rekognition Custom Labels Model (p. 67).

Preparing Images

The images that you use to train a model contain the objects, scenes, or concepts that you want Amazon Rekognition Custom Labels to find.

The content of images should show a variety of backgrounds and lighting conditions that represent the images that you want the trained model to identify. Each image needs an assigned label (or labels) that identifies the objects, scenes, or concepts in the image. If you want your model to find the location of objects in an image, you need to define bounding boxes for the objects in the image. For more information, see Purposing Datasets (p. 42).

This section provides information about the images you use to train and test a model.

Image Format

You can train Amazon Rekognition Custom Labels models with images that are in PNG or JPEG format. Similarly, to detect custom labels by using DetectCustomLabels, you need images that are in PNG or JPEG format.

Input Image Recommendations

Amazon Rekognition Custom Labels requires images to train and test your model. To prepare your images, consider the following:

• Choose a specific domain for the model you want to create. For example, you could choose a model for scenic views and another model for objects such as machine parts. Amazon Rekognition Custom Labels works best if your images are in the chosen domain.

• Use at least 10 images to train your model.

• Images must be in PNG or JPEG format.

• Use images that show the object in a variety of lightings, backgrounds, and resolutions.

• Training and testing images should be similar to the images that you want to use the model with.

• Decide what labels to assign to the images. For more information, see Assigning Image-Level Labels to an Image (p. 63).

• Ensure that images are sufficiently large in terms of resolution. For more information, see Limits in Amazon Rekognition Custom Labels (p. 110).

• Ensure that occlusions don't obscure objects that you want to detect.

• Use images that have sufficient contrast with the background.

• Use images that are bright and sharp. Avoid using images that may be blurry due to subject and camera motion as much as possible.

• Use an image where the object occupies a large proportion of the image.

Image Set Size

Amazon Rekognition Custom Labels uses a set of images to train a model. At a minimum, you should use at least 10 images for training. Amazon Rekognition Custom Labels stores training and testing images in datasets. For more information, see Creating an Amazon Rekognition Custom Labels Dataset (p. 42).


Creating an Amazon Rekognition Custom Labels Dataset

Datasets contain the images, labels, and bounding box information that is used to train and test an Amazon Rekognition Custom Labels model. Datasets are managed by Amazon Rekognition Custom Labels projects. You create the initial training dataset for a project during project creation. You can also add new and existing datasets to a project after the project is created. For example, you might create a dataset that is used to test a model.

You can create a dataset using images from one of the following locations.

• Amazon S3 Bucket (p. 44)

• Local Computer (p. 48)

• Amazon SageMaker Ground Truth (p. 48)

• Existing Dataset (p. 51)

Purposing Datasets

We recommend that you create separate datasets for different image categories. For example, if you have images of scenic views and images of dogs, you should create a dogs dataset and a scenic views dataset.

Finding Objects, Scenes, and Concepts

If you want to create a model that predicts the presence of objects, scenes, and concepts in your images, you assign image-level labels to the images in your dataset. In this case, your dataset requires at least two labels. For example, the following image shows a scene. Your dataset could include image-level labels such as sunrise or countryside.


When you create your dataset, you can include image-level labels for your images. For example, if your images are stored in an Amazon S3 bucket, you can use folder names (p. 44) to add image-level labels. You can also add image-level labels to images after you create a dataset. For more information, see the section called “Assigning Image-Level Labels to an Image” (p. 63). You can add new labels as you need them. For more information, see Editing Labels (p. 63).

The default number of images per dataset is 250,000. You can request an increase for up to 500,000 images per dataset. For more information, see Service Quotas.

Detecting Object Locations

To create a model that predicts the location of objects in your images, you define object location bounding boxes and labels for the images in your dataset. A bounding box is a box that tightly surrounds


an object. For example, the following image shows bounding boxes around an Amazon Echo and an Amazon Echo Dot. Each bounding box has an assigned label (Amazon Echo or Amazon Echo Dot).

To detect object locations, your dataset needs at least one label. During model training, a further label is automatically created that represents the area outside of the bounding boxes on an image. When you create your dataset, you can include bounding box information for your images. For example, you can import an Amazon SageMaker Ground Truth format manifest file (p. 53) that contains bounding boxes. You can also add bounding boxes after you create a dataset. For more information, see Drawing Bounding Boxes (p. 65). You can add new labels as you need them. For more information, see Editing Labels (p. 63).

You can have up to 250,000 images per dataset. This value can't be increased. For more information, see Limits in Amazon Rekognition Custom Labels (p. 110).

Combining Image-Level and Bounding Box Labeled Images

You can combine image-level labels and bounding box labeled images in a single dataset. In this case, Amazon Rekognition Custom Labels chooses whether to create an image-level model or an object location model.

For information about adding labels to a dataset, see Editing Labels (p. 63).

Amazon S3 Bucket

The images are imported from an Amazon S3 bucket. During dataset creation, you can choose to assign label names to images based on the name of the folder that contains the images. The folder(s) must be a child of the Amazon S3 folder path that you specify in S3 folder location during dataset creation. To create a dataset, see Creating a Dataset by Importing Images from an S3 Bucket (p. 45).

For example, assume the following folder structure in an Amazon S3 bucket. If you specify the Amazon S3 folder location as S3-bucket/alexa-devices, the images in the folder echo are assigned the label echo. Similarly, images in the folder echo-dot are assigned the label echo-dot. The names of deeper child folders aren't used to label images. Instead, the appropriate child folder of the Amazon S3 folder


location is used. For example, images in the folder white-echo-dot are assigned the label echo-dot. Images at the level of the S3 folder location (alexa-devices) don't have labels assigned to them.

Folders deeper in the folder structure can be used to label images by specifying a deeper S3 folder location. For example, if you specify S3-bucket/alexa-devices/echo-dot, images in the folder white-echo-dot are labeled white-echo-dot. Images outside the specified S3 folder location, such as echo, aren't imported.

S3-bucket
└── alexa-devices
    ├── echo
    │   ├── echo-image-1.png
    │   ├── echo-image-2.png
    │   ├── .
    │   └── .
    └── echo-dot
        ├── white-echo-dot
        │   ├── white-echo-dot-image-1.png
        │   └── white-echo-dot-image-2.png
        ├── echo-dot-image-1.png
        ├── echo-dot-image-2.png
        ├── .
        └── .

We recommend that you use the Amazon S3 bucket (console bucket) created for you by Amazon Rekognition when you first opened the console in the current AWS Region. If the Amazon S3 bucket that you are using is different (external) to the console bucket, the console prompts you to set up the appropriate permissions during dataset creation. For more information, see the section called “Step 5: Set Up Console Permissions” (p. 10).

Creating a Dataset by Importing Images from an S3 Bucket

The following procedure shows you how to create a dataset using images stored in the console S3 bucket. The images are automatically labeled with the name of the folder in which they are stored.

After you have imported your images, you can add more images, assign labels, and add bounding boxes from a dataset's gallery page. For more information, see Review Your Images (p. 40).

You choose which datasets to use for training and testing when you start training. For more information, see Training an Amazon Rekognition Custom Labels Model (p. 67).

Upload your images to an Amazon Simple Storage Service bucket

1. Create a folder on your local file system. Use a folder name such as alexa-devices.

2. Within the folder you just created, create folders named after each label that you want to use. Forexample, echo and echo-dot. The folder structure should be similar to the following.

alexa-devices
├── echo
│   ├── echo-image-1.png
│   ├── echo-image-2.png
│   ├── .
│   └── .
└── echo-dot
    ├── echo-dot-image-1.png
    ├── echo-dot-image-2.png
    ├── .
    └── .

3. Place the images that correspond to a label into the folder with the same label name.

4. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.

5. Add the folder you created in step 1 to the Amazon S3 bucket (console bucket) created for you by Amazon Rekognition Custom Labels during First Time Set Up. For more information, see Step 2: Create Your First Project (p. 14).

6. Open the Amazon Rekognition console at https://console.aws.amazon.com/rekognition/.

7. Choose Use Custom Labels.

8. Choose Get started.

9. In the left navigation pane, choose Projects.

10. In the Projects page, choose the project to which you want to add a dataset.

11. In the Datasets section of the project page, choose Create dataset.

12. In the Create dataset details page, enter a name for your dataset.

13. Choose Import images from S3 bucket.

14. In S3 folder location, enter the S3 bucket location for the folder you used in step 5.

15. In Automatic labeling, choose Automatically attach a label to my images based on the folder they're stored in.


16. Choose Submit.

17. If you are training your model to detect objects in your images, add labeled bounding boxes to the images in your new dataset. For more information, see Drawing Bounding Boxes (p. 65).

Local Computer

The images are loaded directly from your computer. You can upload up to 30 images at a time. You need to add labels and bounding box information. For more information, see Step 3: Create a Dataset (p. 15).

Amazon SageMaker Ground Truth

With Amazon SageMaker Ground Truth, you can use workers from Amazon Mechanical Turk, a vendor company that you choose, or an internal, private workforce, along with machine learning, to create a labeled set of images. Amazon Rekognition Custom Labels imports Amazon SageMaker Ground Truth manifest files from an Amazon S3 bucket that you specify.

Amazon Rekognition Custom Labels supports the following Amazon SageMaker Ground Truth tasks.

• Image Classification

• Bounding Box

The files you import are the images and a manifest file. The manifest file contains the label and bounding box information for the images you import.

If your images and labels aren't in the format of an Amazon SageMaker Ground Truth manifest file, you can create an Amazon SageMaker format manifest file and use it to import your labeled images. For more information, see Creating a Manifest File (p. 53).
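As an illustration of that format, a single line of an image-classification manifest might look like the following. All field values here (the bucket path, the job name my-label-job, the class name, and the dates) are placeholders, and the required fields are described in Creating a Manifest File (p. 53).

```json
{"source-ref": "s3://bucket/images/echo-image-1.png", "my-label-job": 0, "my-label-job-metadata": {"class-name": "echo", "confidence": 0.95, "type": "groundtruth/image-classification", "human-annotated": "yes", "creation-date": "2020-01-01T00:00:00", "job-name": "labeling-job/my-label-job"}}
```

Each image in the dataset is described by one such JSON line in the manifest.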

Amazon Rekognition needs permissions to access the Amazon S3 bucket where your images are stored. If you are using the console bucket set up for you by Amazon Rekognition Custom Labels, the required permissions are already set up. If you are not using the console bucket, see External Amazon S3 Buckets (p. 11).

The following procedure shows you how to create a dataset by using images labeled by an Amazon SageMaker Ground Truth job. The job output files are stored in your Amazon Rekognition Custom Labels console bucket.

To create a dataset using images labeled by an Amazon SageMaker Ground Truth job (console)

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.

2. In the console bucket, create a folder to hold your images.

Note
The console bucket is created when you first open the Amazon Rekognition Custom Labels console in an AWS Region. For more information, see Step 2: Create Your First Project (p. 14).

3. Upload your images to the folder that you just created.

4. In the console bucket, create a folder to hold the output of the Ground Truth job.

5. Open the Amazon SageMaker console at https://console.aws.amazon.com/sagemaker/.

6. Create a Ground Truth labeling job. You'll need the Amazon S3 URLs for the folders you created in step 2 and step 4. For more information, see Use Amazon SageMaker Ground Truth for Data Labeling.


7. Open the Amazon Rekognition console at https://console.aws.amazon.com/rekognition/.

8. Choose Use Custom Labels.

9. Choose Get started.

10. In the left navigation pane, choose Datasets.

11. In the Datasets section, choose Create dataset.

12. In the Create dataset details page, enter a name for your dataset.

13. For Image location, choose Import images labeled by SageMaker Ground Truth.

14. In .manifest file location, enter the location of the output.manifest file in the folder you created in step 4. It should be in the sub-folder Ground-Truth-Job-Name/manifests/output.


15. Choose Submit.

Existing Dataset

If you've previously created a dataset, you can copy the dataset contents to a new dataset.

To create a dataset using an existing Amazon Rekognition Custom Labels dataset (console)

1. Open the Amazon Rekognition console at https://console.aws.amazon.com/rekognition/.

2. Choose Use Custom Labels.

3. Choose Get started.

4. In the left navigation pane, choose Datasets.

5. In the Datasets section, choose Create dataset.

6. In the Create dataset details page, enter a name for your dataset.

7. For Image location, choose Copy an existing Amazon Rekognition Custom Labels dataset.

8. In Dataset, choose the dataset that you want to copy.


9. Choose Submit.


Creating a Manifest File

You can create a test or training dataset by importing an Amazon SageMaker Ground Truth format manifest file. If your images are labeled in a format that isn't an Amazon SageMaker Ground Truth manifest file, use the following information to create an Amazon SageMaker Ground Truth format manifest file.

Manifest files are in JSON lines format, where each line is a complete JSON object representing the labeling information for an image. Amazon Rekognition Custom Labels supports Amazon SageMaker Ground Truth manifests with JSON lines in the following formats:

• Classification Job Output – Use to add image-level labels to an image. An image-level label defines the class of scene, concept, or object (if object location information isn't needed) that's on an image. An image can have more than one image-level label. For more information, see Image-Level Labels in Manifest Files (p. 56).

• Bounding Box Job Output – Use to label the class and location of one or more objects on an image. For more information, see Object Localization in Manifest Files (p. 59).

Image-level and localization (bounding-box) JSON lines can be chained together in the same manifest file.

Note
The JSON line examples in this section are formatted for readability.

When you import a manifest file, Amazon Rekognition Custom Labels applies validation rules for limits, syntax, and semantics. For more information, see Validation Rules for Manifest Files (p. 62).

The images referenced by a manifest file must be located in the same Amazon S3 bucket as the manifest file. You specify the location of an image in the source-ref field of a JSON line.

Amazon Rekognition needs permissions to access the Amazon S3 bucket where your images are stored. If you are using the console bucket set up for you by Amazon Rekognition Custom Labels, the required permissions are already set up. If you are not using the console bucket, see External Amazon S3 Buckets (p. 11).

The following procedure shows you how to create a dataset by importing an Amazon SageMaker format manifest file that is stored in the console bucket.

To create a dataset using an Amazon SageMaker Ground Truth format manifest file (console)

1. Create your Amazon SageMaker Ground Truth format manifest file. For more information, see Image-Level Labels in Manifest Files (p. 56) and Object Localization in Manifest Files (p. 59).

2. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.

3. In the console bucket, create a folder to hold your manifest file.

Note
The console bucket is created when you first open the Amazon Rekognition Custom Labels console in an AWS Region. For more information, see Step 2: Create Your First Project (p. 14).

4. Upload your manifest file to the folder that you just created.

5. In the console bucket, create a folder to hold your images.

6. Upload your images to the folder you just created.

Important
The source-ref field value in each JSON line must map to images in the folder.


7. Open the Amazon Rekognition console at https://console.aws.amazon.com/rekognition/.

8. Choose Use Custom Labels.

9. Choose Get started.

10. In the left navigation pane, choose Datasets.

11. In the Datasets section, choose Create dataset.

12. In the Create dataset details page, enter a name for your dataset.

13. For Image location, choose Import images labeled by SageMaker Ground Truth.

14. For .manifest file location, enter the location of your manifest file.


15. Choose Submit.

Amazon Rekognition Custom Labels creates a dataset in the console bucket datasets folder. Your original .manifest file remains unchanged.

Topics

• Image-Level Labels in Manifest Files (p. 56)

• Object Localization in Manifest Files (p. 59)

• Validation Rules for Manifest Files (p. 62)

Image-Level Labels in Manifest Files

To import image-level labels (images labeled with scenes, concepts, or objects that don't require localization information), you add Amazon SageMaker Ground Truth Classification Job Output format JSON lines to a manifest file. A manifest file is made of one or more JSON lines, one for each image that you want to import.

To create a manifest file for image-level labels

1. Create an empty text file.

2. Add a JSON line for each image that you want to import. Each JSON line should look similar to the following.

{"source-ref":"s3://custom-labels-console-us-east-1-fa33cb9afd/gt-job/manifest/IMG_1133.png","TestCLConsoleBucket":0,"TestCLConsoleBucket-metadata":{"confidence":0.95,"job-name":"labeling-job/testclconsolebucket","class-name":"Echo Dot","human-annotated":"yes","creation-date":"2020-04-15T20:17:23.433061","type":"groundtruth/image-classification"}}

3. Save the file. You can use the extension .manifest, but it is not required.

4. Create a dataset using the manifest file that you created. For more information, see To create a dataset using an Amazon SageMaker Ground Truth format manifest file (console) (p. 53).
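The manifest-creation steps above can be sketched in Python. This is an illustrative sketch, not part of the guide: the bucket name, image list, and the helper `image_level_json_line` are hypothetical, and the job identifier is a placeholder you would replace with your own.

```python
import json
from datetime import datetime, timezone

def image_level_json_line(s3_uri, class_name, job_id="my-classification-job"):
    """Build one Classification Job Output format JSON line for an image."""
    creation_date = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%f")
    return {
        "source-ref": s3_uri,
        job_id: 0,  # job identifier field; the integer value isn't used
        job_id + "-metadata": {
            "confidence": 1,  # required, but currently unused
            "job-name": "labeling-job/" + job_id,
            "class-name": class_name,
            "human-annotated": "yes",
            "creation-date": creation_date,
            "type": "groundtruth/image-classification",
        },
    }

# One JSON object per line (JSON Lines format); all images must be in
# the same Amazon S3 bucket as the manifest file.
images = {"s3://my-bucket/images/sunrise.png": "Sunrise"}
with open("output.manifest", "w") as f:
    for uri, label in images.items():
        f.write(json.dumps(image_level_json_line(uri, label)) + "\n")
```

To assign more than one image-level label to an image, repeat the identifier/metadata pair in the same JSON object with a different field name.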

Image-Level JSON Lines

In this section, we show you how to create a JSON line for a single image. Consider the following image. A scene for the following image might be called Sunrise.

56

Page 62: Rekognition - Custom Labels Guide...a model, and provides model performance metrics. You can then use your custom model through the Amazon Rekognition Custom Labels API and integrate

Rekognition Custom Labels GuideImage-Level Labels in Manifest Files

The JSON line for the preceding image, with the scene Sunrise, might be the following.

{
    "source-ref": "s3://bucket/images/sunrise.png",
    "testdataset-classification_Sunrise": 1,
    "testdataset-classification_Sunrise-metadata": {
        "confidence": 1,
        "job-name": "labeling-job/testdataset-classification_Sunrise",
        "class-name": "Sunrise",
        "human-annotated": "yes",
        "creation-date": "2020-03-06T17:46:39.176Z",
        "type": "groundtruth/image-classification"
    }
}

Note the following information.

source-ref

(Required) The Amazon S3 location of the image. The format is s3://BUCKET/OBJECT_PATH. Images in an imported dataset must be stored in the same Amazon S3 bucket.

testdataset-classification_Sunrise

(Required) The labeling job identifier. You choose the field name. The field value (1 in the preceding example) is a job identifier. It is not used by Amazon Rekognition Custom Labels and can be any integer value. There must be corresponding metadata identified by the field name with -metadata appended. For example, testdataset-classification_Sunrise-metadata.

testdataset-classification_Sunrise-metadata

(Required) Metadata about the labeling job. The field name must be the same as the labeling job identifier with -metadata appended.

confidence

(Required) Currently not used by Amazon Rekognition Custom Labels, but a value between 0 and 1 must be supplied.

job-name

(Optional) A name that you choose for the job that processes the image.

class-name

(Required) A class name that you choose for the scene or concept that applies to the image. For example, Sunrise.

human-annotated

(Required) true if the annotation was completed by a human, otherwise false.

creation-date

(Required) The Coordinated Universal Time (UTC) date and time that the label was created.

type

(Required) The type of processing that should be applied to the image. For image-level labels, the value is groundtruth/image-classification.

Adding Multiple Labels to an Image

You can add multiple labels to an image. For example, the following JSON adds two labels, football and ball, to a single image.

{
    "source-ref": "S3 bucket location",
    "sport0": 0, # FIRST label
    "sport0-metadata": {
        "class-name": "football",
        "confidence": 0.8,
        "type": "groundtruth/image-classification",
        "job-name": "identify-sport",
        "human-annotated": "yes",
        "creation-date": "2018-10-18T22:18:13.527256"
    },
    "sport1": 1, # SECOND label
    "sport1-metadata": {
        "class-name": "ball",
        "confidence": 0.8,
        "type": "groundtruth/image-classification",
        "job-name": "identify-sport",
        "human-annotated": "yes",
        "creation-date": "2018-10-18T22:18:13.527256"
    }
} # end of annotations for 1 image

Object Localization in Manifest Files

You can import images labeled with object localization information by adding Amazon SageMaker Ground Truth Bounding Box Job Output format JSON lines to a manifest file.

Localization information represents the location of an object on an image. The location is represented by a bounding box that surrounds the object. The bounding box structure contains the upper-left coordinates of the bounding box and the bounding box's width and height. A bounding box format JSON line includes bounding boxes for the locations of one or more objects on an image and the class of each object on the image.

A manifest file is made of one or more JSON lines; each line contains the information for a single image.

To create a manifest file for object localization

1. Create an empty text file.

2. Add a JSON line for each image that you want to import. Each JSON line should look similar to the following.

{"source-ref": "s3://bucket/images/IMG_1186.png", "bounding-box": {"image_size": [{"width": 640, "height": 480, "depth": 3}], "annotations": [{ "class_id": 1, "top": 251, "left": 399, "width": 155, "height": 101}, {"class_id": 0, "top": 65, "left": 86, "width": 220, "height": 334}]}, "bounding-box-metadata": {"objects": [{ "confidence": 1}, {"confidence": 1}], "class-map": {"0": "Echo", "1": "Echo Dot"}, "type": "groundtruth/object-detection", "human-annotated": "yes", "creation-date": "2013-11-18 02:53:27", "job-name": "my job"}}

3. Save the file. You can use the extension .manifest, but it is not required.

4. Create a dataset using the file that you just created. For more information, see To create a dataset using an Amazon SageMaker Ground Truth format manifest file (console) (p. 53).
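The object-localization steps can be sketched the same way. The helper below is my own illustration (not from the guide); the bucket, class map, and pixel values echo the Echo/Echo Dot example in this section.

```python
import json

def bounding_box_json_line(s3_uri, image_size, annotations, class_map,
                           job_name="my job"):
    """Build one Bounding Box Job Output format JSON line.

    annotations: list of dicts with class_id, top, left, width, height (pixels)
    class_map:   maps str(class_id) -> label name
    """
    return {
        "source-ref": s3_uri,
        "bounding-box": {
            "image_size": [image_size],
            "annotations": annotations,
        },
        "bounding-box-metadata": {
            # one entry per annotation, matched by index; confidence is unused
            "objects": [{"confidence": 1} for _ in annotations],
            "class-map": class_map,
            "type": "groundtruth/object-detection",
            "human-annotated": "yes",
            "creation-date": "2013-11-18 02:53:27",
            "job-name": job_name,
        },
    }

line = bounding_box_json_line(
    "s3://custom-labels-bucket/images/IMG_1186.png",
    {"width": 640, "height": 480, "depth": 3},
    [{"class_id": 1, "top": 251, "left": 399, "width": 155, "height": 101},
     {"class_id": 0, "top": 65, "left": 86, "width": 220, "height": 334}],
    {"0": "Echo", "1": "Echo Dot"},
)
print(json.dumps(line))  # one line of the manifest
```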

Object Bounding Box JSON Lines

In this section, we show you how to create a JSON line for a single image. The following image shows bounding boxes around Amazon Echo and Amazon Echo Dot devices.


The following is the bounding box JSON line for the preceding image.

{
    "source-ref": "s3://custom-labels-bucket/images/IMG_1186.png",
    "bounding-box": {
        "image_size": [{
            "width": 640,
            "height": 480,
            "depth": 3
        }],
        "annotations": [{
            "class_id": 1,
            "top": 251,
            "left": 399,
            "width": 155,
            "height": 101
        }, {
            "class_id": 0,
            "top": 65,
            "left": 86,
            "width": 220,
            "height": 334
        }]
    },
    "bounding-box-metadata": {
        "objects": [{
            "confidence": 1
        }, {
            "confidence": 1
        }],
        "class-map": {
            "0": "Echo",
            "1": "Echo Dot"
        },
        "type": "groundtruth/object-detection",
        "human-annotated": "yes",
        "creation-date": "2013-11-18 02:53:27",
        "job-name": "my job"
    }
}


Note the following information.

source-ref

(Required) The Amazon S3 location of the image. The format is s3://BUCKET/OBJECT_PATH. Images in an imported dataset must be stored in the same Amazon S3 bucket.

bounding-box

(Required) The labeling job identifier. You choose the field name. Contains the image size and the bounding boxes for each object detected in the image. There must be corresponding metadata identified by the field name with -metadata appended. For example, bounding-box-metadata.

image_size

(Required) A single-element array containing the size of the image in pixels.

• height – (Required) The height of the image in pixels.
• width – (Required) The width of the image in pixels.
• depth – (Required) The number of channels in the image. For RGB images, the value is 3. Not currently used by Amazon Rekognition Custom Labels, but a value is required.

annotations

(Required) An array of bounding box information for each object detected in the image.

• class_id – (Required) Maps to the label in class-map. In the preceding example, the object with the class_id of 1 is the Echo Dot in the image.
• top – (Required) The distance from the top of the image to the top of the bounding box, in pixels.
• left – (Required) The distance from the left of the image to the left of the bounding box, in pixels.
• width – (Required) The width of the bounding box, in pixels.
• height – (Required) The height of the bounding box, in pixels.

bounding-box-metadata

(Required) Metadata about the labeling job. The field name must be the same as the labeling job identifier with -metadata appended. Contains an array of bounding box information for each object detected in the image.

objects

(Required) An array of objects that are in the image. Maps to the annotations array by index. The confidence attribute isn't used by Amazon Rekognition Custom Labels.

class-map

(Required) A map of the classes that apply to objects detected in the image.

type

(Required) The type of classification job. groundtruth/object-detection identifies the job as object detection.

creation-date

(Required) The Coordinated Universal Time (UTC) date and time that the label was created.

human-annotated

(Required) true if the annotation was completed by a human, otherwise false.

job-name

(Optional) The name of the job that processes the image.


Validation Rules for Manifest Files

When you import a manifest file, Amazon Rekognition Custom Labels applies validation rules for limits, syntax, and semantics. The Amazon SageMaker Ground Truth schema enforces syntax validation. For more information, see Output. The following are the validation rules for limits and semantics.

Note

• The 20% invalidity rules apply cumulatively across all validation rules. If the import exceeds the 20% limit due to any combination, such as 15% invalid JSON and 15% invalid images, the import fails.

• Each dataset object is a line in the manifest. Blank/invalid lines are also counted as dataset objects.

• Overlap is calculated as (labels common to the test and training datasets) / (training labels).

Topics
• Limits (p. 62)
• Semantics (p. 62)

Limits

• Manifest file size – Maximum 1 GB. Raises: Error.
• Maximum line count for a manifest file – Maximum of 250,000 dataset objects as lines in a manifest. Raises: Error.
• Lower boundary on total number of valid dataset objects per label – >= 1. Raises: Error.
• Lower boundary on labels – >= 2. Raises: Error.
• Upper bound on labels – <= 250. Raises: Error.
• Minimum bounding boxes per image – 0. Raises: None.
• Maximum bounding boxes per image – 50. Raises: None.

Semantics

• Empty manifest – Raises: Error.
• Missing or inaccessible source-ref object, fewer than 20% of dataset objects – Raises: Warning.
• Missing or inaccessible source-ref object, more than 20% of dataset objects – Raises: Error.
• Test labels not present in the training dataset – At least 50% overlap in the labels is required. Raises: Error.
• Mix of label and object examples for the same label in a dataset (classification and detection for the same class in a dataset object) – Raises: No error or warning.
• Overlapping assets between test and train – There should not be an overlap between the test and training datasets.
• Images in a dataset must be from the same bucket – Raises: Error if the objects are in a different bucket.
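As a quick pre-import sanity check, the label-overlap rule (labels common to the test and training datasets, divided by training labels, with at least 50% overlap required) can be computed locally. This is an illustrative sketch of my own, not part of the Rekognition validation service.

```python
def label_overlap(train_labels, test_labels):
    """Fraction of training labels that also appear in the test dataset."""
    train, test = set(train_labels), set(test_labels)
    if not train:
        return 0.0
    return len(train & test) / len(train)

train = {"Echo", "Echo Dot", "Sunrise", "River"}
test = {"Echo", "Echo Dot", "Sunrise"}
overlap = label_overlap(train, test)
print(f"overlap = {overlap:.0%}, passes 50% rule = {overlap >= 0.5}")
```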

Editing Labels

You can use the Amazon Rekognition Custom Labels console to add, change, or remove labels from a dataset.

To add a label (console)

1. Open the Amazon Rekognition console at https://console.aws.amazon.com/rekognition/.

2. Choose Use Custom Labels.

3. Choose Get started.

4. In the left navigation pane, choose the project that contains the dataset you want to use.

5. In the Datasets section, choose the dataset that you want to use.

6. In the dataset gallery page, choose Start labeling to enter labeling mode.

7. In the Filter by labels section of the dataset gallery, choose Add to open the Edit labels dialog box.

8. Enter a new label name.

9. Choose Add label.

10. Repeat steps 7 and 8 until you have created all the labels you need.

11. Choose Save to save the labels that you added.

Assigning Image-Level Labels to an Image

To train an Amazon Rekognition Custom Labels model that finds objects, scenes, or concepts, your dataset images need to be labeled with image-level labels. An image-level label indicates that an image contains an object, scene, or concept.

63

Page 69: Rekognition - Custom Labels Guide...a model, and provides model performance metrics. You can then use your custom model through the Amazon Rekognition Custom Labels API and integrate

Rekognition Custom Labels GuideAssigning Image-Level Labels to an Image

For example, the following image shows the Columbia river. To train a model to detect rivers, you would add a 'river' label that applies to the entire image. You can add other labels, as needed.

A dataset that contains image-level labels needs at least two labels defined. Each image needs at least one assigned label that identifies the object, scene, or concept in the image.

To assign labels to an image (console)

1. Open the Amazon Rekognition console at https://console.aws.amazon.com/rekognition/.

2. Choose Use Custom Labels.

3. Choose Get started.

4. In the left navigation pane, choose the project that contains the dataset you want to use.

5. In the Datasets section, choose the dataset that you want to use.

6. In the dataset gallery page, choose Start labeling to enter labeling mode.

7. Select one or more images that you want to add labels to. You can only select images on a single page at a time. To select the images between a first and second image on a page:

a. Select the first image.

b. Press and hold the shift key.

c. Select the second image. The images between the first and second image are also selected.

d. Release the shift key.

8. Choose Assign Labels.


9. In the Assign a label to selected images dialog box, select a label that you want to assign to the image or images.

10. Choose Assign to assign the label to the image.

11. Repeat labeling until every image is annotated with the required labels.

Drawing Bounding Boxes

You can train a model to detect the location of objects on an image. To do this, the dataset needs information about the location of the object(s) in the image, and a corresponding label that identifies the type of the object(s). This is known as localization information. The location of an object is expressed as a bounding box, which is a box that tightly surrounds an object. The following image shows a bounding box surrounding an Amazon Echo Dot. The image also contains an Amazon Echo without a bounding box.

The accuracy of your model is affected by the accuracy of the bounding boxes. Try to draw them as close as possible to the object.

Amazon Rekognition Custom Labels can detect logos and animated characters. The testing images must contain bounding boxes around the logo or animated character.

Before you can add bounding boxes, you must add at least one label to the dataset. For more information, see Editing Labels (p. 63).

To draw a bounding box and assign a label (console)

1. Open the Amazon Rekognition console at https://console.aws.amazon.com/rekognition/.

2. Choose Use Custom Labels.

3. Choose Get started.


4. In the left navigation pane, choose the project that contains the dataset that you want to use.

5. In the Datasets section, choose the dataset that you want to use.

6. In the dataset gallery page, choose Start labeling to enter labeling mode.

7. Choose the images that you want to add bounding boxes to.

8. Choose Draw bounding box. A series of tips are shown before the bounding box editor is displayed.

9. In the Labels pane on the right, select the label that you want to assign to a bounding box.

10. In the drawing tool, place the pointer at the top-left area of the desired object.

11. Press the left mouse button and draw a box around the object. Try to draw the bounding box as close as possible to the object.

12. Release the mouse button and the bounding box is highlighted.

13. Click Next if you have more images to label. Otherwise, choose Done to finish labeling.

14. Repeat steps 7–11 until you have created a bounding box in each image that contains objects.

15. Choose Save changes to save your changes.

16. Choose Exit to exit labeling mode.


Managing an Amazon Rekognition Custom Labels Model

An Amazon Rekognition Custom Labels model is a mathematical model that predicts the presence of objects, scenes, and concepts in new images by finding patterns in images used to train the model. This section shows you how to train a model, evaluate its performance, and make improvements. It also shows you how to make a model available for use and how to delete a model when you no longer need it.

Topics
• Training an Amazon Rekognition Custom Labels Model (p. 67)
• Evaluating a Trained Amazon Rekognition Custom Labels Model (p. 73)
• Improving an Amazon Rekognition Custom Labels Model (p. 85)
• Running a Trained Amazon Rekognition Custom Labels Model (p. 86)
• Deleting an Amazon Rekognition Custom Labels Model (p. 91)

Training an Amazon Rekognition Custom Labels Model

Training a model requires labeled images and algorithms to train the model. Amazon Rekognition Custom Labels simplifies this process by choosing the algorithm for you. Accurately labeled images are very important to the process because they influence the algorithm that is chosen to train the model. After training a model, you can evaluate its performance. For more information, see Evaluating a Trained Amazon Rekognition Custom Labels Model (p. 73).

Training requires two datasets: a training dataset that's used to train the model, and a testing dataset that's used to verify how well the trained model predicts the correct custom labels. The testing dataset also helps determine the algorithm used for training. The images in the testing dataset shouldn't be images that were previously used for training. They should include the objects, scenes, and concepts that the model is trained to analyze. You can provide a testing dataset in the following ways.

• Use an existing Amazon Rekognition Custom Labels dataset.
• Create a new dataset. For more information, see Creating an Amazon Rekognition Custom Labels Dataset (p. 42).
• Split your training dataset. You can use 20% of your training dataset to create a test dataset. The images chosen are a representative sampling and aren't used in the training dataset. We recommend splitting your training dataset only if you don't have an alternative dataset that you can use. Splitting a training dataset reduces the number of images available for training.

You can train a model by using the Amazon Rekognition Custom Labels console or the Amazon Rekognition Custom Labels API.

Note
You are charged for the amount of time that it takes to successfully train a model. Typically training takes from 30 minutes to 24 hours to complete. For more information, see Training hours.

Topics


• Training a Model (Console) (p. 68)
• Training a Model (SDK) (p. 68)

Training a Model (Console)

You can train a model by using the Amazon Rekognition Custom Labels console.

To train a model (console)

1. Open the Amazon Rekognition console at https://console.aws.amazon.com/rekognition/.

2. Choose Use Custom Labels.

3. Choose Get started.

4. In the left navigation pane, choose Projects.

5. On the Projects page, choose the project that contains the dataset you want to train.

6. In the Models section, choose Train new model. The Train Model page is displayed.

7. On the Train model page, do the following:

a. In Choose project, choose the project you created earlier.
b. In Choose training set, choose the training dataset that you want to use.
c. In Create test set, choose how you want to create the test dataset, and follow the instructions.

8. Choose Train to start training the model. The project resources page is shown.

9. Wait until training has finished. You can check the current status in the Status field of the Model section.

Training a Model (SDK)

You train a model by calling CreateProjectVersion. To train a model, the following information is needed:

• Name – A unique name for the model version.
• Project ARN – The Amazon Resource Name (ARN) of the project that manages the model.


• Training dataset – An Amazon SageMaker Ground Truth–formatted manifest file and images that are used to train the model. You can create the dataset in the Amazon Rekognition Custom Labels console. Another option is to provide an external manifest file, such as one from an Amazon SageMaker Ground Truth labeling job. The dataset is stored in an Amazon S3 bucket.

• Testing dataset – Test images and labels used to evaluate the model after training. You can use a dataset created in the Amazon Rekognition Custom Labels console (stored in the console's Amazon S3 bucket). Alternatively, you can use an Amazon SageMaker Ground Truth format manifest file stored in an external Amazon S3 bucket. You can also split the training dataset to use it as a test dataset.

• Training results location – The Amazon S3 location where the results are placed. You can use the same location as the console Amazon S3 bucket, or you can choose a different location. We recommend choosing a different location because this allows you to set permissions and avoid potential naming conflicts with training output from using the Amazon Rekognition Custom Labels console.

The response from CreateProjectVersion is an ARN that you use to identify the model version in subsequent requests. You can also use the ARN to secure the model version. Training a model version takes a while to complete. The Python and Java examples in this topic use waiters to wait for training to complete. A waiter is a utility method that polls for a particular state to occur. Alternatively, you can get the current status of training by calling DescribeProjectVersions. Training is completed when the Status field value is TRAINING_COMPLETED. After training is completed, you can evaluate the model's quality by reviewing the evaluation results.

To train a model (SDK)

1. If you haven't already:

a. Create or update an IAM user with AmazonRekognitionFullAccess permissions. For more information, see Step 2: Create an IAM Administrator User and Group (p. 6).

b. Install and configure the AWS CLI and the AWS SDKs. For more information, see Step 2: Set Up the AWS CLI and AWS SDKs (p. 7).

c. If you are using an external Amazon S3 bucket, set the permissions. For more information, see Step 4: Set Up Amazon S3 Bucket Permissions for SDK Use (p. 8).

2. Use the following example code to train a project.

AWS CLI

The following example creates a model. The training dataset is split to create the test dataset. Replace the following:

• my_project_arn with the Amazon Resource Name (ARN) of the project.
• version_name with a unique version name of your choosing.
• output_bucket with the name of the Amazon S3 bucket where Amazon Rekognition Custom Labels saves the training results.
• output_folder with the name of the folder where the training results are saved.
• training_bucket with the name of the Amazon S3 bucket that contains the manifest file and images.
• training_manifest with the name and path to the manifest file.

aws rekognition create-project-version \
  --project-arn "my_project_arn" \
  --version-name "version_name" \
  --output-config '{"S3Bucket":"output_bucket", "S3KeyPrefix":"output_folder"}' \
  --training-data '{"Assets": [{ "GroundTruthManifest": { "S3Object": { "Bucket": "training_bucket", "Name": "training_manifest" } } } ] }' \
  --testing-data '{"AutoCreate":true}'


Python

The following example creates a model. Replace the following:

• my_project_arn with the Amazon Resource Name (ARN) of the project.

• version_name with a unique version name of your choosing.

• output_bucket with the name of the Amazon S3 bucket where Amazon Rekognition Custom Labels saves the training results.

• output_folder with the name of the folder where the training results are saved.

• training_bucket with the name of the Amazon S3 bucket that contains the training manifest file and images.

• training_manifest with the name and path to the training manifest file.

• The example splits the training dataset to create a test dataset. If you want to use your own test dataset, do the following:

• Uncomment the following line of code:

#testing_dataset= json.loads('{"Assets": [{ "GroundTruthManifest": { "S3Object": { "Bucket": "testing_bucket", "Name": "testing_manifest" } } } ]}')

• Replace testing_bucket with the name of the Amazon S3 bucket that contains the testing manifest file and images.

• Replace testing_manifest with the name and path to the testing manifest file.

#Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)

import boto3
import json
import time

def train_model(project_arn, version_name, output_config, training_dataset, testing_dataset):

    client=boto3.client('rekognition')

    print('Starting training of: ' + version_name)

    try:
        response=client.create_project_version(ProjectArn=project_arn,
            VersionName=version_name,
            OutputConfig=output_config,
            TrainingData=training_dataset,
            TestingData=testing_dataset)

        # Wait for the project version training to complete
        project_version_training_completed_waiter = client.get_waiter('project_version_training_completed')
        project_version_training_completed_waiter.wait(ProjectArn=project_arn,
            VersionNames=[version_name])

        #Get the completion status
        describe_response=client.describe_project_versions(ProjectArn=project_arn,
            VersionNames=[version_name])
        for model in describe_response['ProjectVersionDescriptions']:
            print("Status: " + model['Status'])
            print("Message: " + model['StatusMessage'])
    except Exception as e:
        print(e)

    print('Done...')

def main():
    project_arn='project_arn'
    version_name='version_name'

    output_config = json.loads('{"S3Bucket":"output_bucket", "S3KeyPrefix":"output_folder"}')

    training_dataset= json.loads('{"Assets": [{ "GroundTruthManifest": { "S3Object": { "Bucket": "training_bucket", "Name": "training_manifest" } } } ] }')

    testing_dataset= json.loads('{"AutoCreate":true}')
    #testing_dataset= json.loads('{"Assets": [{ "GroundTruthManifest": { "S3Object": { "Bucket": "testing_bucket", "Name": "testing_manifest" } } } ]}')

    train_model(project_arn, version_name, output_config, training_dataset, testing_dataset)

if __name__ == "__main__":
    main()

Java

The following example creates a model. Replace the following:

• projectArn with the Amazon Resource Name (ARN) of the project.

• versionName with a unique version name of your choosing.

• outputBucket with the name of the Amazon S3 bucket where Amazon Rekognition Custom Labels saves the training results.

• outputFolder with the name of the folder where the training results are saved.

• trainingBucket with the name of the Amazon S3 bucket that contains the training manifest file and images.

• trainingManifest with the name and path to the training manifest file.

• The example splits the training dataset to create a test dataset. If you want to use your own test dataset, do the following:

• Uncomment the following lines of code:

/*
TestingData testingData = new TestingData()
    .withAssets(new Asset[] {new Asset()
        .withGroundTruthManifest(testingGroundTruthManifest)});
*/

• Replace testingBucket with the name of the Amazon S3 bucket that contains the testing manifest file and images.

• Replace testingManifest with the name and path to the testing manifest file.

//Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
//PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)

package com.amazonaws.samples;

import com.amazonaws.services.rekognition.AmazonRekognition;


import com.amazonaws.services.rekognition.AmazonRekognitionClientBuilder;
import com.amazonaws.services.rekognition.model.Asset;
import com.amazonaws.services.rekognition.model.CreateProjectVersionRequest;
import com.amazonaws.services.rekognition.model.CreateProjectVersionResult;
import com.amazonaws.services.rekognition.model.DescribeProjectVersionsRequest;
import com.amazonaws.services.rekognition.model.DescribeProjectVersionsResult;
import com.amazonaws.services.rekognition.model.GroundTruthManifest;
import com.amazonaws.services.rekognition.model.OutputConfig;
import com.amazonaws.services.rekognition.model.ProjectVersionDescription;
import com.amazonaws.services.rekognition.model.S3Object;
import com.amazonaws.services.rekognition.model.TestingData;
import com.amazonaws.services.rekognition.model.TrainingData;
import com.amazonaws.waiters.Waiter;
import com.amazonaws.waiters.WaiterParameters;

public class TrainModel {

    public static void main(String[] args) throws Exception {

        String projectArn = "project_arn";
        String versionName = "version_name";
        String outputBucket = "output_bucket";
        String outputFolder = "output_folder";
        String trainingBucket = "training_bucket";
        String trainingManifest = "training_manifest";
        String testingBucket = "testing_bucket";
        String testingManifest = "testing_manifest";

        OutputConfig outputConfig = new OutputConfig()
            .withS3Bucket(outputBucket)
            .withS3KeyPrefix(outputFolder);

        GroundTruthManifest trainingGroundTruthManifest = new GroundTruthManifest()
            .withS3Object(new S3Object()
                .withBucket(trainingBucket)
                .withName(trainingManifest));

        GroundTruthManifest testingGroundTruthManifest = new GroundTruthManifest()
            .withS3Object(new S3Object()
                .withBucket(testingBucket)
                .withName(testingManifest));

        TrainingData trainingData = new TrainingData()
            .withAssets(new Asset[] {new Asset()
                .withGroundTruthManifest(trainingGroundTruthManifest)});

        TestingData testingData = new TestingData()
            .withAutoCreate(true);

        /*
        TestingData testingData = new TestingData()
            .withAssets(new Asset[] {new Asset()
                .withGroundTruthManifest(testingGroundTruthManifest)});
        */

        AmazonRekognition rekognitionClient = AmazonRekognitionClientBuilder.defaultClient();

        CreateProjectVersionRequest request = new CreateProjectVersionRequest()
            .withOutputConfig(outputConfig)
            .withProjectArn(projectArn)
            .withTrainingData(trainingData)
            .withTestingData(testingData)
            .withVersionName(versionName);

        try {
            CreateProjectVersionResult result = rekognitionClient.createProjectVersion(request);

            System.out.println("Model ARN: " + result.getProjectVersionArn());
            System.out.println("Training model...");

            DescribeProjectVersionsRequest describeProjectVersionsRequest =
                new DescribeProjectVersionsRequest()
                    .withVersionNames(versionName)
                    .withProjectArn(projectArn);

            Waiter<DescribeProjectVersionsRequest> waiter =
                rekognitionClient.waiters().projectVersionTrainingCompleted();
            waiter.run(new WaiterParameters<>(describeProjectVersionsRequest));

            DescribeProjectVersionsResult response =
                rekognitionClient.describeProjectVersions(describeProjectVersionsRequest);

            for (ProjectVersionDescription projectVersionDescription :
                    response.getProjectVersionDescriptions()) {
                System.out.println("Status: " + projectVersionDescription.getStatus());
            }
            System.out.println("Done...");

        } catch (Exception e) {
            System.out.println(e.toString());
        }
        System.out.println("Done...");
    }
}

Evaluating a Trained Amazon Rekognition Custom Labels Model

When training completes, you evaluate the performance of the model. To help you, Amazon Rekognition Custom Labels provides summary metrics and evaluation metrics for each label. For information about the available metrics, see Metrics for Evaluating Your Model (p. 74). To improve your model using metrics, see Improving an Amazon Rekognition Custom Labels Model (p. 85).

If you're satisfied with the accuracy of your model, you can start to use it. For more information, see Running a Trained Amazon Rekognition Custom Labels Model (p. 86).

Topics
• Metrics for Evaluating Your Model (p. 74)
• Accessing Training Results (Console) (p. 76)


• Accessing Amazon Rekognition Custom Labels Training Results (SDK) (p. 77)

Metrics for Evaluating Your Model

After your model is trained, Amazon Rekognition Custom Labels returns a number of metrics from model testing that you can use to evaluate the performance of your model. This topic describes the metrics available to you, and how to understand if your trained model is performing well.

The Amazon Rekognition Custom Labels console provides the following metrics as a summary of the training results and as metrics for each label:

• Precision (p. 75)

• Recall (p. 75)

• Overall Model Performance (p. 75)

Each metric we provide is a commonly used metric for evaluating the performance of a Machine Learning model. Amazon Rekognition Custom Labels returns metrics for the results of testing across the entire test dataset, as well as metrics for each custom label. You are also able to review the performance of your trained custom model for each image in your test dataset.

Evaluating Model Performance

During testing, Amazon Rekognition Custom Labels predicts if a test image contains a custom label. The confidence score is a value that quantifies the certainty of the model's prediction.

If the confidence score for a custom label exceeds the threshold value, the model output will include this label. Predictions can be categorized in the following ways:

• True positive – The Amazon Rekognition Custom Labels model correctly predicts the presence of the custom label in the test image. That is, the predicted label is also a "ground truth" label for that image. For example, Amazon Rekognition Custom Labels correctly returns a soccer ball label when a soccer ball is present in an image.

• False positive – The Amazon Rekognition Custom Labels model incorrectly predicts the presence of a custom label in a test image. That is, the predicted label isn't a ground truth label for the image. For example, Amazon Rekognition Custom Labels returns a soccer ball label, but there is no soccer ball label in the ground truth for that image.

• False negative – The Amazon Rekognition Custom Labels model doesn't predict that a custom label is present in the image, but the "ground truth" for that image includes this label. For example, Amazon Rekognition Custom Labels doesn't return a 'soccer ball' custom label for an image that contains a soccer ball.

• True negative – The Amazon Rekognition Custom Labels model correctly predicts that a custom label isn't present in the test image. For example, Amazon Rekognition Custom Labels doesn't return a soccer ball label for an image that doesn't contain a soccer ball.

The console provides access to true positive, false positive, and false negative values for each image in your test dataset.

These prediction results are used to calculate the following metrics for each label, as well as an aggregate for your entire test set. The same definitions apply to predictions made by the model at the bounding box level, with the distinction that all metrics are calculated over each bounding box (prediction or ground truth) in each test image.
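The four outcomes follow mechanically from the threshold rule: a prediction counts only when its confidence is above the label's threshold, and it is then compared against the ground truth. The following is a minimal sketch of that rule (the function is illustrative and not part of the Rekognition API):

```python
def classify_prediction(confidence, threshold, in_ground_truth):
    """Categorize one label prediction for one test image.

    confidence      -- the model's confidence score for the label (0-1)
    threshold       -- the assumed threshold for the label
    in_ground_truth -- True if the label is in the image's ground truth
    """
    predicted = confidence > threshold  # counted only above the threshold
    if predicted and in_ground_truth:
        return "true positive"
    if predicted and not in_ground_truth:
        return "false positive"
    if not predicted and in_ground_truth:
        return "false negative"
    return "true negative"

# A soccer ball is present and the model is confident above the threshold.
print(classify_prediction(0.97, 0.60, True))  # true positive
```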


Assumed Threshold

Amazon Rekognition Custom Labels automatically calculates an assumed threshold value (0-1) for each of your custom labels. The assumed threshold for each label is the value above which a prediction is counted as a true or false positive. It is set based on your test dataset.

Precision

Amazon Rekognition Custom Labels provides precision metrics for each label and an average precision metric for the entire test dataset.

Precision is the fraction of correct predictions (true positives) over all model predictions (true and false positives) at the assumed threshold for an individual label. As the threshold is increased, the model might make fewer predictions. In general, however, it will have a higher ratio of true positives over false positives compared to a lower threshold. Possible values for precision range from 0–1, and higher values indicate higher precision.

For example, when the model predicts that a soccer ball is present in an image, how often is that prediction correct? Suppose there's an image with 8 soccer balls and 5 rocks. If the model predicts 9 soccer balls—8 correctly predicted and 1 false positive—then the precision for this example is 0.89. However, if the model predicted 13 soccer balls in the image with 8 correct predictions and 5 incorrect, then the resulting precision is lower.

For more information, see Precision and recall.

Recall

Amazon Rekognition Custom Labels provides average recall metrics for each label and an average recall metric for the entire test dataset.

Recall is the fraction of your test set labels that were predicted correctly above the assumed threshold. It is a measure of how often the model can predict a custom label correctly when it's actually present in the images of your test set. The range for recall is 0–1. Higher values indicate a higher recall.

For example, if an image contains 8 soccer balls, how many of them are detected correctly? In the preceding example where an image has 8 soccer balls and 5 rocks, if the model detects 5 of the soccer balls, then the recall value is 0.62. If after retraining, the new model detected 9 soccer balls, including all 8 that were present in the image, then the recall value is 1.0.
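The soccer-ball arithmetic in these examples is the standard precision and recall calculation; a quick sketch that reproduces the numbers above:

```python
def precision(tp, fp):
    # Correct predictions over all predictions the model made.
    return tp / (tp + fp)

def recall(tp, fn):
    # Correct predictions over all labels actually present.
    return tp / (tp + fn)

# 9 predicted soccer balls: 8 correct, 1 false positive.
print(round(precision(tp=8, fp=1), 2))  # 0.89
# 5 of the 8 soccer balls in the image were detected (3 missed).
print(round(recall(tp=5, fn=3), 2))     # 0.62
```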

For more information, see Precision and recall.

Overall Model Performance

Amazon Rekognition Custom Labels provides an average model performance score for each label and an average model performance score for the entire test dataset.

Model performance is an aggregate measure that takes into account both precision and recall over all labels (for example, F1 score or average precision). The model performance score is a value between 0 and 1. The higher the value, the better the model is performing for both recall and precision. Specifically, model performance for classification tasks is commonly measured by F1 score, which is the harmonic mean of the precision and recall scores at the assumed threshold. For example, for a model with a precision of 0.9 and a recall of 1.0, the F1 score is 0.947.

A high value for the F1 score indicates that the model is performing well for both precision and recall. If the model isn't performing well, for example, with a low precision of 0.30 and a high recall of 1.0, the F1 score is 0.46. Similarly, if the precision is high (0.95) and the recall is low (0.20), the F1 score is 0.33. In both cases, the F1 score is low and indicates problems with the model.
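The F1 scores quoted above follow directly from the harmonic-mean formula; a quick check in Python:

```python
def f1_score(precision, recall):
    # Harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.90, 1.00), 3))  # 0.947
print(round(f1_score(0.30, 1.00), 2))  # 0.46
print(round(f1_score(0.95, 0.20), 2))  # 0.33
```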


For more information, see F1 score.

Using Metrics

For a given model that you have trained, and depending on your application, you can make a trade-off between precision and recall by changing the threshold. At a higher threshold, you will generally get higher precision (more correct predictions of soccer balls), but lower recall (more actual soccer balls will be missed). At a lower threshold, you will get higher recall (more actual soccer balls will be correctly predicted), but lower precision (more of the soccer ball predictions will be wrong).

The metrics also inform you about the steps you might take to improve model performance, if needed. For more information, see Improving an Amazon Rekognition Custom Labels Model (p. 85).

Note
DetectCustomLabels returns predictions ranging from 0 to 100, which correspond to the metric range of 0-1.
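As a sketch of that trade-off, the following filters hypothetical DetectCustomLabels-style results against a caller-chosen cutoff on the 0-100 scale (the label names and scores below are made up for illustration):

```python
# Hypothetical detection results; Confidence is on the API's 0-100 scale.
labels = [
    {"Name": "soccer ball", "Confidence": 97.3},
    {"Name": "soccer ball", "Confidence": 58.1},
    {"Name": "rock", "Confidence": 41.0},
]

def filter_labels(labels, min_confidence):
    # Keep only predictions at or above the caller's threshold (0-100).
    return [label["Name"] for label in labels
            if label["Confidence"] >= min_confidence]

# A higher threshold trades recall for precision.
print(filter_labels(labels, 90))  # ['soccer ball']
print(filter_labels(labels, 40))  # ['soccer ball', 'soccer ball', 'rock']
```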

Accessing Training Results (Console)

You can use the Amazon Rekognition Custom Labels console to see training results for a model.

To access training results (console)

1. Open the Amazon Rekognition console at https://console.aws.amazon.com/rekognition/.

2. Choose Use Custom Labels.

3. Choose Get started.

4. In the left navigation pane, choose Projects.

5. On the Projects resources page, choose the project that contains the trained model you want to evaluate.

6. In the Model section, choose the model for which you want to see training results. The evaluation summary results are shown.

7. Choose View test results to see the test results for individual images. A series of tips is shown before the test results appear. For more information about image test results, see Metrics for Evaluating Your Model (p. 74).

8. Use the metrics to evaluate the performance of the model. For more information, see Improving an Amazon Rekognition Custom Labels Model (p. 85).


Accessing Amazon Rekognition Custom Labels Training Results (SDK)

The Amazon Rekognition API provides metrics beyond those provided in the console.

Like the console, the API provides access to the following metrics as summary information for the training results and as training results for each label:

• Precision (p. 75)

• Recall (p. 75)

• Overall Model Performance (p. 75)

The average threshold for all labels and the threshold for individual labels are returned.

The API also provides the following metrics for classification and image detection (object location on an image):

• Confusion Matrix for image classification.

• Mean Average Precision (mAP) for image detection.

• Mean Average Recall (mAR) for image detection.

The API also provides true positive, false positive, false negative, and true negative values. For more information, see Metrics for Evaluating Your Model (p. 74).

The aggregate F1 score metric is returned directly by the API. Other metrics are accessible from the Summary File (p. 77) and Evaluation Manifest Snapshot (p. 78) files stored in an Amazon S3 bucket.

For example code, see Accessing the Summary File and Evaluation Manifest Snapshot (SDK) (p. 81).

Topics
• Summary File (p. 77)

• Evaluation Manifest Snapshot (p. 78)

• Accessing the Summary File and Evaluation Manifest Snapshot (SDK) (p. 81)

• Reference: Training Results Summary File (p. 83)

Summary File

The summary file contains training results information about the model as a whole, and metrics for each label. The metrics are precision, recall, and F1 score. The threshold value for the model is also supplied. The summary file location is accessible from the EvaluationResult object returned by DescribeProjectVersions. For more information, see Reference: Training Results Summary File (p. 83).

The following is an example summary file.

{
    "Version": 1,
    "AggregatedEvaluationResults": {
        "ConfusionMatrix": [
            {
                "GroundTruthLabel": "CAP",
                "PredictedLabel": "CAP",
                "Value": 0.9948717948717949
            },
            {
                "GroundTruthLabel": "CAP",
                "PredictedLabel": "WATCH",
                "Value": 0.008547008547008548
            },
            {
                "GroundTruthLabel": "WATCH",
                "PredictedLabel": "CAP",
                "Value": 0.1794871794871795
            },
            {
                "GroundTruthLabel": "WATCH",
                "PredictedLabel": "WATCH",
                "Value": 0.7008547008547008
            }
        ],
        "F1Score": 0.9726959470546408,
        "Precision": 0.9719115848331294,
        "Recall": 0.9735042735042735
    },
    "EvaluationDetails": {
        "EvaluationEndTimestamp": "2019-11-21T07:30:23.910943",
        "Labels": [
            "CAP",
            "WATCH"
        ],
        "NumberOfTestingImages": 624,
        "NumberOfTrainingImages": 5216,
        "ProjectVersionArn": "arn:aws:rekognition:us-east-1:nnnnnnnnn:project/my-project/version/v0/1574317227432"
    },
    "LabelEvaluationResults": [
        {
            "Label": "CAP",
            "Metrics": {
                "F1Score": 0.9794344473007711,
                "Precision": 0.9819587628865979,
                "Recall": 0.9769230769230769,
                "Threshold": 0.9879502058029175
            },
            "NumberOfTestingImages": 390
        },
        {
            "Label": "WATCH",
            "Metrics": {
                "F1Score": 0.9659574468085106,
                "Precision": 0.961864406779661,
                "Recall": 0.9700854700854701,
                "Threshold": 0.014450683258473873
            },
            "NumberOfTestingImages": 234
        }
    ]
}
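After you download the summary file from Amazon S3, it is plain JSON that you can walk with a few lines of Python. The following sketch uses a trimmed copy of the structure shown above (the sample values come from this example, not live API output):

```python
import json

# A trimmed summary file matching the structure shown above.
summary_json = '''
{
  "Version": 1,
  "AggregatedEvaluationResults": {"F1Score": 0.9726959470546408},
  "LabelEvaluationResults": [
    {"Label": "CAP",   "Metrics": {"F1Score": 0.9794, "Threshold": 0.9879}},
    {"Label": "WATCH", "Metrics": {"F1Score": 0.9659, "Threshold": 0.0144}}
  ]
}
'''

summary = json.loads(summary_json)
# Aggregate F1 score for the whole test dataset.
print("Aggregate F1:", summary["AggregatedEvaluationResults"]["F1Score"])
# Per-label F1 score and assumed threshold.
for label in summary["LabelEvaluationResults"]:
    print(label["Label"], label["Metrics"]["F1Score"], label["Metrics"]["Threshold"])
```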

Evaluation Manifest Snapshot

The evaluation manifest snapshot contains detailed information about the test results. The snapshot includes the confidence rating for each prediction, and the classification of the prediction (true positive, true negative, false positive, or false negative) compared to the actual classification of the image.


The files are a snapshot since only images that could be used for testing and training are included. Images that can't be verified, such as images in the wrong format, aren't included in the manifest. The testing snapshot location is accessible from the TestingDataResult object returned by DescribeProjectVersions. The training snapshot location is accessible from the TrainingDataResult object returned by DescribeProjectVersions.

The snapshot is in Amazon SageMaker Ground Truth manifest output format, with fields added to provide additional information, such as the result of a detection's binary classification. The following snippet shows the additional fields.

"rekognition-custom-labels-evaluation-details": {
    "version": 1,
    "is-true-positive": true,
    "is-true-negative": false,
    "is-false-positive": false,
    "is-false-negative": false,
    "is-present-in-ground-truth": true,
    "ground-truth-labelling-jobs": ["rekognition-custom-labels-training-job"]
}

• version – The version of the rekognition-custom-labels-evaluation-details field format within the manifest snapshot.

• is-true-positive... – The binary classification of the prediction, based on how the confidence score compares to the minimum threshold for the label.

• is-present-in-ground-truth – True if the prediction made by the model is present in the ground truth information used for training, otherwise false. This value isn't based on whether the confidence score exceeds the minimum threshold calculated by the model.

• ground-truth-labelling-jobs – A list of ground truth fields in the manifest line that are used for training.

For information about the Amazon SageMaker Ground Truth manifest format, see Output.

The following is an example testing manifest snapshot that shows metrics for image classification and object detection.

// For image classification
{
    "source-ref": "s3://test-bucket/dataset/beckham.jpeg",
    "rekognition-custom-labels-training-0": 1,
    "rekognition-custom-labels-training-0-metadata": {
        "confidence": 1.0,
        "job-name": "rekognition-custom-labels-training-job",
        "class-name": "Football",
        "human-annotated": "yes",
        "creation-date": "2019-09-06T00:07:25.488243",
        "type": "groundtruth/image-classification"
    },
    "rekognition-custom-labels-evaluation-0": 1,
    "rekognition-custom-labels-evaluation-0-metadata": {
        "confidence": 0.95,
        "job-name": "rekognition-custom-labels-evaluation-job",
        "class-name": "Football",
        "human-annotated": "no",
        "creation-date": "2019-09-06T00:07:25.488243",
        "type": "groundtruth/image-classification",
        "rekognition-custom-labels-evaluation-details": {
            "version": 1,
            "ground-truth-labelling-jobs": ["rekognition-custom-labels-training-job"],
            "is-true-positive": true,
            "is-true-negative": false,
            "is-false-positive": false,
            "is-false-negative": false,
            "is-present-in-ground-truth": true
        }
    }
}

// For object detection
{
    "source-ref": "s3://test-bucket/dataset/beckham.jpeg",
    "rekognition-custom-labels-training-0": {
        "annotations": [
            {
                "class_id": 0,
                "width": 39,
                "top": 409,
                "height": 63,
                "left": 712
            },
            ...
        ],
        "image_size": [
            {
                "width": 1024,
                "depth": 3,
                "height": 768
            }
        ]
    },
    "rekognition-custom-labels-training-0-metadata": {
        "job-name": "rekognition-custom-labels-training-job",
        "class-map": {
            "0": "Cap",
            ...
        },
        "human-annotated": "yes",
        "objects": [
            {
                "confidence": 1.0
            },
            ...
        ],
        "creation-date": "2019-10-21T22:02:18.432644",
        "type": "groundtruth/object-detection"
    },
    "rekognition-custom-labels-evaluation": {
        "annotations": [
            {
                "class_id": 0,
                "width": 39,
                "top": 409,
                "height": 63,
                "left": 712
            },
            ...
        ],
        "image_size": [
            {
                "width": 1024,
                "depth": 3,
                "height": 768
            }
        ]
    },
    "rekognition-custom-labels-evaluation-metadata": {
        "confidence": 0.95,
        "job-name": "rekognition-custom-labels-evaluation-job",
        "class-map": {
            "0": "Cap",
            ...
        },
        "human-annotated": "no",
        "objects": [
            {
                "confidence": 0.95,
                "rekognition-custom-labels-evaluation-details": {
                    "version": 1,
                    "ground-truth-labelling-jobs": ["rekognition-custom-labels-training-job"],
                    "is-true-positive": true,
                    "is-true-negative": false,
                    "is-false-positive": false,
                    "is-false-negative": false,
                    "is-present-in-ground-truth": true
                }
            },
            ...
        ],
        "creation-date": "2019-10-21T22:02:18.432644",
        "type": "groundtruth/object-detection"
    }
}
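Because the snapshot is a manifest of one JSON object per line, the binary classifications can be tallied with a short script. The following is a sketch assuming image-classification entries shaped like the example above (the two sample lines are hypothetical, trimmed to the fields the script reads):

```python
import json

# Two hypothetical snapshot lines, trimmed to the evaluation-details fields.
snapshot_lines = [
    '{"rekognition-custom-labels-evaluation-0-metadata": {"class-name": "Football", '
    '"rekognition-custom-labels-evaluation-details": {"is-true-positive": true, '
    '"is-false-positive": false, "is-false-negative": false, "is-true-negative": false}}}',
    '{"rekognition-custom-labels-evaluation-0-metadata": {"class-name": "Football", '
    '"rekognition-custom-labels-evaluation-details": {"is-true-positive": false, '
    '"is-false-positive": true, "is-false-negative": false, "is-true-negative": false}}}',
]

counts = {"is-true-positive": 0, "is-false-positive": 0,
          "is-false-negative": 0, "is-true-negative": 0}
for line in snapshot_lines:
    metadata = json.loads(line)["rekognition-custom-labels-evaluation-0-metadata"]
    details = metadata["rekognition-custom-labels-evaluation-details"]
    for key in counts:
        counts[key] += details[key]  # a True flag counts as 1
print(counts)
```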

Accessing the Summary File and Evaluation Manifest Snapshot (SDK)

To get training results, you call DescribeProjectVersions. By default, DescribeProjectVersions returns information about all model versions in a project. To specify a specific model version, add the model version ARN to the ProjectVersionArns input parameter.

The location of the metrics is returned in the ProjectVersionDescription response from DescribeProjectVersions.

• EvaluationResult – The location of the summary file.
• TestingDataResult – The location of the evaluation manifest snapshot used for testing.

Note
The amount of time, in seconds, that you are billed for training is returned in BillableTrainingTimeInSeconds.

For information about the metrics that are returned by Amazon Rekognition Custom Labels, see Accessing Amazon Rekognition Custom Labels Training Results (SDK) (p. 77).
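As a sketch of where those locations live in the response, the following walks a trimmed DescribeProjectVersions-style dictionary. The field names follow the API shape described above, but the values here are placeholders, not live output:

```python
# A trimmed DescribeProjectVersions-style response with placeholder values.
response = {
    "ProjectVersionDescriptions": [{
        "Status": "TRAINING_COMPLETED",
        "BillableTrainingTimeInSeconds": 1800,
        "EvaluationResult": {
            "F1Score": 0.97,
            "Summary": {"S3Object": {"Bucket": "output_bucket",
                                     "Name": "output_folder/summary.json"}}
        }
    }]
}

# Extract the summary file location for each model version.
for model in response["ProjectVersionDescriptions"]:
    s3 = model["EvaluationResult"]["Summary"]["S3Object"]
    print("Summary file: s3://{}/{}".format(s3["Bucket"], s3["Name"]))
```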

To access training results (SDK)

1. If you haven't already:

a. Create or update an IAM user with AmazonRekognitionFullAccess permissions. For more information, see Step 2: Create an IAM Administrator User and Group (p. 6).

b. Install and configure the AWS CLI and the AWS SDKs. For more information, see Step 2: Set Up the AWS CLI and AWS SDKs (p. 7).

2. Use the following examples to get the locations of the summary file and the evaluation manifest snapshot. Replace project_arn with the Amazon Resource Name (ARN) of the project that contains the model. For more information, see Creating an Amazon Rekognition Custom Labels Project (p. 32). Replace version_name with the name of the model version. For more information, see Training a Model (SDK) (p. 68).


CLI

aws rekognition describe-project-versions --project-arn "project_arn" \
    --version-names "version_name"

Python

#Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)

import boto3
import io
from io import BytesIO
import sys
import json

def describe_model(project_arn, version_name):

    client=boto3.client('rekognition')

    response=client.describe_project_versions(ProjectArn=project_arn,
        VersionNames=[version_name])

    for model in response['ProjectVersionDescriptions']:
        print(json.dumps(model,indent=4,default=str))

def main():

    project_arn='project_arn'
    version_name='version_name'

    describe_model(project_arn, version_name)

if __name__ == "__main__":
    main()

Java

//Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
//PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)

package com.amazonaws.samples;

import com.amazonaws.services.rekognition.AmazonRekognition;
import com.amazonaws.services.rekognition.AmazonRekognitionClientBuilder;
import com.amazonaws.services.rekognition.model.DescribeProjectVersionsRequest;
import com.amazonaws.services.rekognition.model.DescribeProjectVersionsResult;
import com.amazonaws.services.rekognition.model.ProjectVersionDescription;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.SerializationFeature;

public class DescribeModel {

    public static void main(String[] args) throws Exception {

        String projectArn = "project_arn";
        String versionName = "version_name";

        AmazonRekognition rekognitionClient = AmazonRekognitionClientBuilder.defaultClient();

        DescribeProjectVersionsRequest describeProjectVersionsRequest = new DescribeProjectVersionsRequest()
                .withProjectArn(projectArn)
                .withVersionNames(versionName);

        ObjectMapper mapper = new ObjectMapper();
        mapper.enable(SerializationFeature.INDENT_OUTPUT);

        DescribeProjectVersionsResult response = rekognitionClient.describeProjectVersions(describeProjectVersionsRequest);

        for (ProjectVersionDescription projectVersionDescription : response.getProjectVersionDescriptions()) {
            System.out.println(mapper.writeValueAsString(projectVersionDescription));
        }
    }
}

The F1 score and summary file location are returned in EvaluationResult. For example:

"EvaluationResult": {
    "F1Score": 1.0,
    "Summary": {
        "S3Object": {
            "Bucket": "echo-dot-scans",
            "Name": "test-output/EvaluationResultSummary-my-echo-dots-project-v2.json"
        }
    }
}

The evaluation manifest snapshot is stored in the location that you specified in the --output-config input parameter when you trained the model. For more information, see Training a Model (SDK) (p. 68).
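As an illustration, you can pull the summary-file location out of a ProjectVersionDescription programmatically before downloading it. The helper below is a hypothetical sketch (the function name is ours); the sample dict mirrors the EvaluationResult example shown above.

```python
# Hypothetical helper: extract the S3 location of the training results
# summary file from a ProjectVersionDescription dict, as found in the
# DescribeProjectVersions response.

def summary_file_location(project_version_description):
    """Return (bucket, key) of the summary file, or None if training
    hasn't produced an EvaluationResult yet."""
    evaluation = project_version_description.get('EvaluationResult')
    if not evaluation:
        return None
    s3_object = evaluation['Summary']['S3Object']
    return s3_object['Bucket'], s3_object['Name']

# Example, using the EvaluationResult shown earlier:
description = {
    "EvaluationResult": {
        "F1Score": 1.0,
        "Summary": {
            "S3Object": {
                "Bucket": "echo-dot-scans",
                "Name": "test-output/EvaluationResultSummary-my-echo-dots-project-v2.json"
            }
        }
    }
}
print(summary_file_location(description))
```

In a real script you would pass each element of response['ProjectVersionDescriptions'] to this helper, then download the object with the boto3 S3 client.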

Reference: Training Results Summary File

The training results summary contains metrics that you can use to evaluate your model. The summary file is also used to display metrics on the console training results page. The summary file is stored in an Amazon S3 bucket after training. To get the summary file, call DescribeProjectVersions. For example code, see Accessing the Summary File and Evaluation Manifest Snapshot (SDK) (p. 81).

Summary File

The following JSON is the format of the summary file.

EvaluationDetails (section 3)

Overview information about the training task. This includes the ARN of the model version that was evaluated (ProjectVersionArn), the date and time that training finished (EvaluationEndTimestamp), and a list of labels detected during training (Labels). Also included are the number of images used for training (NumberOfTrainingImages) and evaluation (NumberOfTestingImages).


AggregatedEvaluationResults (section 1)

You can use AggregatedEvaluationResults to evaluate the overall performance of the trained model when used with the testing dataset. Aggregated metrics are included for the Precision, Recall, and F1Score metrics. For object detection (the object location on an image), AverageRecall (mAR) and AveragePrecision (mAP) metrics are returned. For classification (the type of object in an image), a confusion matrix metric is returned.

LabelEvaluationResults (section 2)

You can use LabelEvaluationResults to evaluate the performance of individual labels. The labels are sorted by the F1 score of each label. The metrics included are Precision, Recall, F1Score, and Threshold (used for classification).

The file name is formatted as follows: EvaluationSummary-ProjectName-VersionName.json.

{
    "Version": "integer",
    // section-3
    "EvaluationDetails": {
        "ProjectVersionArn": "string",
        "EvaluationEndTimestamp": "string",
        "Labels": "[string]",
        "NumberOfTrainingImages": "int",
        "NumberOfTestingImages": "int"
    },
    // section-1
    "AggregatedEvaluationResults": {
        "Metrics": {
            "Precision": "float",
            "Recall": "float",
            "F1Score": "float",
            // The following 2 fields are only applicable to object detection
            "AveragePrecision": "float",
            "AverageRecall": "float",
            // The following field is only applicable to classification
            "ConfusionMatrix": [
                {
                    "GroundTruthLabel": "string",
                    "PredictedLabel": "string",
                    "Value": "float"
                },
                ...
            ]
        }
    },
    // section-2
    "LabelEvaluationResults": [
        {
            "Label": "string",
            "NumberOfTestingImages": "int",
            "Metrics": {
                "Threshold": "float",
                "Precision": "float",
                "Recall": "float",
                "F1Score": "float"
            }
        },
        ...
    ]
}
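Once downloaded, the summary file is plain JSON and easy to post-process. The following sketch (the function name and the example data are ours, not part of the Rekognition API) ranks labels by F1 score so that you can spot which labels might need more training data:

```python
import json

# Hypothetical helper: given a parsed summary file, return labels whose
# F1 score falls below a cutoff, worst-performing first.
def weak_labels(summary, min_f1):
    results = summary['LabelEvaluationResults']
    below = [(r['Label'], r['Metrics']['F1Score'])
             for r in results if r['Metrics']['F1Score'] < min_f1]
    return sorted(below, key=lambda pair: pair[1])

# Example summary fragment in the format shown above.
summary = json.loads("""
{
  "LabelEvaluationResults": [
    {"Label": "good_cookie", "Metrics": {"F1Score": 0.95}},
    {"Label": "burnt_cookie", "Metrics": {"F1Score": 0.60}},
    {"Label": "broken_cookie", "Metrics": {"F1Score": 0.72}}
  ]
}
""")
print(weak_labels(summary, 0.8))
```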


Improving an Amazon Rekognition Custom Labels Model

The performance of machine learning models is largely dependent on factors such as the complexity and variability of your custom labels (the specific objects and scenes you are interested in), the quality and representative power of the training dataset you provide, and the model frameworks and machine learning methods used to train the model.

Amazon Rekognition Custom Labels makes this process simpler, and no machine learning expertise is required. However, building a good model often involves iterating over data and model improvements to achieve the desired performance. The following information describes how to improve your model.

Data

In general, you can improve the quality of your model with larger quantities of better-quality data. Use training images that clearly show the object or scene and aren't cluttered with things you don't care about. For bounding boxes around objects, use training images that show the object fully visible and not occluded by other objects.

Make sure that your training and test datasets match the type of images that you will eventually run inference on. For objects, such as logos, where you have just a few training examples, you should provide bounding boxes around the logo in your test images. These images represent or depict the scenarios in which you want to localize the object.

Reducing False Positives (Better Precision)

• First, check whether increasing the confidence threshold lets you keep the correct predictions while eliminating false positives. At some point, this has diminishing gains because of the trade-off between precision and recall for a given model.

• You might see one or more of your custom labels of interest (A) consistently get confused with the same class of objects (but not a label that you're interested in) (B). To help, add B as an object class label to your training dataset (along with the images that you got the false positive on). Effectively, you're helping the model learn to predict B and not A through the new training images.

• You might find that the model is confused between two of your custom labels (A and B): the test image with label A is predicted as having label B, and vice versa. In this case, first check for mislabeled images in your training and test sets. Also, adding more training images that reflect this confusion will help a retrained model learn to better discriminate between A and B.
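To see the precision/recall trade-off described above concretely, you can compute both metrics at candidate thresholds from a labeled sample of predictions. This is a hypothetical sketch (the function and the sample data are ours, not part of the Rekognition API):

```python
def precision_recall_at(threshold, predictions, total_positives):
    """predictions: list of (confidence, is_true_positive) pairs for one label.
    total_positives: number of instances of that label in the test set.
    Returns (precision, recall) if only predictions at or above the
    threshold are kept."""
    kept = [correct for confidence, correct in predictions if confidence >= threshold]
    true_positives = sum(kept)
    precision = true_positives / len(kept) if kept else 0.0
    recall = true_positives / total_positives if total_positives else 0.0
    return precision, recall

# Raising the threshold trades recall for precision:
predictions = [(0.95, True), (0.90, True), (0.80, True), (0.60, False), (0.55, False)]
print(precision_recall_at(0.5, predictions, 3))   # keeps everything
print(precision_recall_at(0.85, predictions, 3))  # keeps only high-confidence hits
```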

Reducing False Negatives (Better Recall)

• Lower the confidence threshold to improve recall.

• Use better examples to model the variety of both the object and the images in which they appear.

• Split your label into classes that are easier to learn. For example, instead of good cookies and bad cookies, you might want good cookies, burnt cookies, and broken cookies to help the model learn each unique concept better.


Running a Trained Amazon Rekognition Custom Labels Model

When you're satisfied with the performance of the model, you can start to use it. To use a model, you must first start it by using the StartProjectVersion API operation. The console provides example code for you to use.

You are charged for the amount of time, in minutes, that the model is running. For more information, see Inference hours.

A single inference unit represents 1 hour of model use. A single inference unit can support up to 5 transactions per second (TPS). Use a higher number to increase the TPS throughput of your model. You are charged for the number of inference units that you use. You specify the number of inference units to use in the MinInferenceUnits input parameter.
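Based on the stated capacity of up to 5 TPS per inference unit, a quick back-of-the-envelope calculation gives the minimum number of units for a target throughput. This helper is our own illustration, not an AWS API:

```python
import math

def inference_units_for_tps(target_tps, tps_per_unit=5):
    """Smallest number of inference units that can sustain target_tps,
    assuming each unit supports up to tps_per_unit transactions/second."""
    return max(1, math.ceil(target_tps / tps_per_unit))

print(inference_units_for_tps(4))   # one unit is enough
print(inference_units_for_tps(12))  # needs three units
```

Keep in mind that more units means a higher per-hour charge, so size for your expected peak load.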

Starting a model might take a few minutes to complete. To check the current status of the model readiness, use DescribeProjectVersions.

After the model is started, you use DetectCustomLabels to analyze images with the model. For more information, see Analyzing an Image with a Trained Model (p. 96). The console also provides example code to call DetectCustomLabels.

Topics

• Starting or Stopping an Amazon Rekognition Custom Labels Model (Console) (p. 86)

• Starting an Amazon Rekognition Custom Labels Model (SDK) (p. 87)

• Stopping an Amazon Rekognition Custom Labels Model (SDK) (p. 89)

Starting or Stopping an Amazon Rekognition Custom Labels Model (Console)

The Amazon Rekognition Custom Labels console provides example code that you can use to start and stop a model.

To start or stop a model (console)

1. If you haven't already:

a. Create or update an IAM user with AmazonRekognitionFullAccess permissions. For more information, see Step 2: Create an IAM Administrator User and Group (p. 6).

b. Install and configure the AWS CLI and the AWS SDKs. For more information, see Step 2: Set Up the AWS CLI and AWS SDKs (p. 7).

2. Open the Amazon Rekognition console at https://console.aws.amazon.com/rekognition/.

3. Choose Use Custom Labels.

4. Choose Get started.

5. In the left navigation pane, choose Projects.

6. On the Projects resources page, choose the project that contains the trained model that you want to start or stop.

7. In the Models section, choose the model that you want to start or stop. The summary results are shown.

8. In the Use model section, choose API Code.


9. At the command prompt, use the AWS CLI command that calls start-project-version to start your model. Use the code snippet that calls stop-project-version to stop your model. The value of --project-version-arn should be the Amazon Resource Name (ARN) of your model.

10. Choose your project name at the top of the page to go back to the project overview page.
11. In the Model section, check the status of the model. When the status is The model is running, you can use the model to analyze images.

Starting an Amazon Rekognition Custom Labels Model (SDK)

You start a model by calling the StartProjectVersion API and passing the Amazon Resource Name (ARN) of the model in the ProjectVersionArn input parameter. You also specify the number of inference units that you want to use. For more information, see Running a Trained Amazon Rekognition Custom Labels Model (p. 86).

A model might take a while to start. The Python and Java examples in this topic use waiters to wait for the model to start. A waiter is a utility method that polls for a particular state to occur. Alternatively, you can check the current status by calling DescribeProjectVersions.
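If your SDK doesn't provide a waiter, you can poll yourself. The loop below is a generic sketch (the function name and status strings are ours); in practice, get_status would wrap describe_project_versions and read the Status field of the matching ProjectVersionDescription.

```python
import time

def wait_for_model_status(get_status, target='RUNNING',
                          failed=('FAILED',), delay=30, max_attempts=40):
    """Call get_status() repeatedly until it returns target, a failure
    state, or we run out of attempts."""
    for attempt in range(max_attempts):
        status = get_status()
        if status == target:
            return status
        if status in failed:
            raise RuntimeError('Model entered state: ' + status)
        time.sleep(delay)
    raise TimeoutError('Model never reached ' + target)

# Example with a stubbed status source standing in for DescribeProjectVersions:
statuses = iter(['STARTING', 'STARTING', 'RUNNING'])
print(wait_for_model_status(lambda: next(statuses), delay=0))
```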

To start a model (SDK)

1. If you haven't already:

a. Create or update an IAM user with AmazonRekognitionFullAccess permissions. For more information, see Step 2: Create an IAM Administrator User and Group (p. 6).

b. Install and configure the AWS CLI and the AWS SDKs. For more information, see Step 2: Set Up the AWS CLI and AWS SDKs (p. 7).

2. Use the following example code to start a model.

CLI

Change the value of project-version-arn to the ARN of the model that you want to start. Change the value of --min-inference-units to the number of inference units that you want to use.

aws rekognition start-project-version --project-version-arn model_arn \
  --min-inference-units "1"

Python

Change the following values:

• project_arn to the ARN of the project that contains the model that you want to use.
• model_arn to the ARN of the model that you want to start.
• version_name to the name of the model that you want to start.
• min_inference_units to the number of inference units that you want to use.

#Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)

import boto3

def start_model(project_arn, model_arn, version_name, min_inference_units):

    client=boto3.client('rekognition')

    try:
        # Start the model
        print('Starting model: ' + model_arn)
        response=client.start_project_version(ProjectVersionArn=model_arn,
            MinInferenceUnits=min_inference_units)

        # Wait for the model to be in the running state
        project_version_running_waiter = client.get_waiter('project_version_running')
        project_version_running_waiter.wait(ProjectArn=project_arn,
            VersionNames=[version_name])

        # Get the running status
        describe_response=client.describe_project_versions(ProjectArn=project_arn,
            VersionNames=[version_name])
        for model in describe_response['ProjectVersionDescriptions']:
            print("Status: " + model['Status'])
            print("Message: " + model['StatusMessage'])
    except Exception as e:
        print(e)

    print('Done...')

def main():
    project_arn='project_arn'
    model_arn='model_arn'
    min_inference_units=1
    version_name='version_name'

    start_model(project_arn, model_arn, version_name, min_inference_units)

if __name__ == "__main__":
    main()

Java

Change the following values:

• projectArn to the ARN of the project that contains the model that you want to use.

• projectVersionArn to the ARN of the model that you want to start.

• versionName to the name of the model that you want to start.

• minInferenceUnits to the number of inference units that you want to use.

//Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
//PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)

package com.amazonaws.samples;

import com.amazonaws.services.rekognition.AmazonRekognition;
import com.amazonaws.services.rekognition.AmazonRekognitionClientBuilder;
import com.amazonaws.services.rekognition.model.DescribeProjectVersionsRequest;
import com.amazonaws.services.rekognition.model.DescribeProjectVersionsResult;
import com.amazonaws.services.rekognition.model.ProjectVersionDescription;
import com.amazonaws.services.rekognition.model.StartProjectVersionRequest;
import com.amazonaws.services.rekognition.model.StartProjectVersionResult;
import com.amazonaws.waiters.Waiter;
import com.amazonaws.waiters.WaiterParameters;

public class StartModel {

    public static void main(String[] args) throws Exception {

        String projectVersionArn = "model_arn";
        String projectArn = "project_arn";
        int minInferenceUnits = 1;
        String versionName = "version_name";

        AmazonRekognition rekognitionClient = AmazonRekognitionClientBuilder.defaultClient();

        StartProjectVersionRequest request = new StartProjectVersionRequest()
                .withMinInferenceUnits(minInferenceUnits)
                .withProjectVersionArn(projectVersionArn);

        try {
            StartProjectVersionResult result = rekognitionClient.startProjectVersion(request);
            System.out.println("Status: " + result.getStatus());

            DescribeProjectVersionsRequest describeProjectVersionsRequest = new DescribeProjectVersionsRequest()
                    .withVersionNames(versionName)
                    .withProjectArn(projectArn);

            Waiter<DescribeProjectVersionsRequest> waiter = rekognitionClient.waiters().projectVersionRunning();
            waiter.run(new WaiterParameters<>(describeProjectVersionsRequest));

            DescribeProjectVersionsResult response = rekognitionClient.describeProjectVersions(describeProjectVersionsRequest);

            for (ProjectVersionDescription projectVersionDescription : response.getProjectVersionDescriptions()) {
                System.out.println("Status: " + projectVersionDescription.getStatus());
            }
            System.out.println("Done...");
        } catch (Exception e) {
            System.out.println(e.toString());
        }
    }
}

Stopping an Amazon Rekognition Custom Labels Model (SDK)

You stop a model by calling the StopProjectVersion API and passing the Amazon Resource Name (ARN) of the model in the ProjectVersionArn input parameter.

A model might take a while to stop. To check the current status, use DescribeProjectVersions.


To stop a model (SDK)

1. If you haven't already:

a. Create or update an IAM user with AmazonRekognitionFullAccess permissions. For more information, see Step 2: Create an IAM Administrator User and Group (p. 6).

b. Install and configure the AWS CLI and the AWS SDKs. For more information, see Step 2: Set Up the AWS CLI and AWS SDKs (p. 7).

2. Use the following example code to stop a running model.

CLI

Change the value of project-version-arn to the ARN of the model that you want to stop.

aws rekognition stop-project-version --project-version-arn model_arn

Python

The following example stops a model that is already running.

Change the value of model_arn to the ARN of the model that you want to stop.

#Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)

import boto3

def stop_model(model_arn):

    client=boto3.client('rekognition')

    print('Stopping model:' + model_arn)

    # Stop the model
    try:
        response=client.stop_project_version(ProjectVersionArn=model_arn)
        status=response['Status']
        print('Status: ' + status)
    except Exception as e:
        print(e)

    print('Done...')

def main():
    model_arn='model_arn'
    stop_model(model_arn)

if __name__ == "__main__":
    main()

Java

The following example stops a model that is already running.

Change the value of projectVersionArn to the ARN of the model that you want to stop.


//Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
//PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)

package com.amazonaws.samples;

import com.amazonaws.services.rekognition.AmazonRekognition;
import com.amazonaws.services.rekognition.AmazonRekognitionClientBuilder;
import com.amazonaws.services.rekognition.model.StopProjectVersionRequest;
import com.amazonaws.services.rekognition.model.StopProjectVersionResult;

public class StopModel {

    public static void main(String[] args) throws Exception {

        String projectVersionArn = "model_arn";

        AmazonRekognition rekognitionClient = AmazonRekognitionClientBuilder.defaultClient();

        try {
            StopProjectVersionRequest request = new StopProjectVersionRequest()
                    .withProjectVersionArn(projectVersionArn);

            StopProjectVersionResult result = rekognitionClient.stopProjectVersion(request);
            System.out.println(result.getStatus());
        } catch (Exception e) {
            System.out.println(e.toString());
        }
        System.out.println("Done...");
    }
}

Deleting an Amazon Rekognition Custom Labels Model

You can delete a model by using the Amazon Rekognition Custom Labels console or by using the DeleteProjectVersion API. You can't delete a model if it is running or if it is training. To stop a running model, use the StopProjectVersion API. For more information, see Stopping an Amazon Rekognition Custom Labels Model (SDK) (p. 89). If a model is training, wait until it finishes before you delete the model.

A deleted model can't be undeleted.

Topics

• Deleting an Amazon Rekognition Custom Labels Model (Console) (p. 92)

• Deleting an Amazon Rekognition Custom Labels Model (SDK) (p. 93)


Deleting an Amazon Rekognition Custom Labels Model (Console)

The following procedure shows how to delete one or more models from a project details page. You can also delete a model from a model's details page.

To delete a model (console)

1. Open the Amazon Rekognition console at https://console.aws.amazon.com/rekognition/.

2. Choose Use Custom Labels.

3. Choose Get started.

4. In the left navigation pane, choose Projects.

5. Choose the project that contains the model that you want to delete. The project details page opens.

6. In the Models section, select the model(s) that you want to delete.

Note
If the model can't be selected, the model is either running or training, and can't be deleted. Check the Status field and try again after stopping the running model, or wait until training finishes.

7. Choose Delete model and the Delete model dialog box is shown.

8. Enter delete to confirm deletion.

9. Choose Delete to delete the model. Deleting the model might take a while to complete.

Note
If you close the dialog box during model deletion, the models are still deleted.


Deleting an Amazon Rekognition Custom Labels Model (SDK)

You delete an Amazon Rekognition Custom Labels model by calling DeleteProjectVersion and supplying the Amazon Resource Name (ARN) of the model that you want to delete. You can get the model ARN from the Use your model section of the model details page in the Amazon Rekognition Custom Labels console. Alternatively, call DescribeProjectVersions and supply the following.

• The ARN of the project (ProjectArn) that the model is associated with.
• The version name (VersionNames) of the model.

The model ARN is the ProjectVersionArn field in the ProjectVersionDescription object in the DescribeProjectVersions response.

You can't delete a model if it is running or training. To determine whether the model is running or training, call DescribeProjectVersions and check the Status field of the model's ProjectVersionDescription object. To stop a running model, use the StopProjectVersion API. For more information, see Stopping an Amazon Rekognition Custom Labels Model (SDK) (p. 89). You have to wait for a model to finish training before you can delete it.
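If you call DescribeProjectVersions with the version name supplied, the response contains only the matching model, so extracting the ARN is straightforward. This is a hypothetical sketch (the function name is ours; the response dict is a minimal stand-in for the real API response):

```python
def model_arn_from_response(response):
    """Return the ProjectVersionArn of the first model description in a
    DescribeProjectVersions response, assuming VersionNames was set so
    that at most one description is returned."""
    descriptions = response.get('ProjectVersionDescriptions', [])
    return descriptions[0]['ProjectVersionArn'] if descriptions else None

# Minimal stand-in for a DescribeProjectVersions response:
response = {
    'ProjectVersionDescriptions': [
        {'ProjectVersionArn':
         'arn:aws:rekognition:us-east-1:123456789012:project/my-project/version/my-version/1630000000000',
         'Status': 'STOPPED'}
    ]
}
print(model_arn_from_response(response))
```

You could also check the Status field of the same description before calling DeleteProjectVersion.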

To delete a model (SDK)

1. If you haven't already:

a. Create or update an IAM user with AmazonRekognitionFullAccess permissions. For moreinformation, see Step 2: Create an IAM Administrator User and Group (p. 6).

b. Install and configure the AWS CLI and the AWS SDKs. For more information, see Step 2: Set Upthe AWS CLI and AWS SDKs (p. 7).

2. Use the following code to delete a model.

AWS CLI

Change the value of project-version-arn to the ARN of the model that you want to delete.

aws rekognition delete-project-version --project-version-arn model_arn


Python

Change the value of model_arn to the ARN of the model that you want to delete.

#Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)

import boto3

def delete_model(model_arn):

    client=boto3.client('rekognition')

    # Delete the model
    print('Deleting model:' + model_arn)
    response=client.delete_project_version(ProjectVersionArn=model_arn)
    print('Status: ' + response['Status'])
    print('Done...')

def main():
    model_arn='model_arn'
    delete_model(model_arn)

if __name__ == "__main__":
    main()

Java

Change the value of modelArn to the ARN of the model that you want to delete.

//Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved.
//PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)

package com.amazonaws.samples;

import com.amazonaws.services.rekognition.AmazonRekognition;
import com.amazonaws.services.rekognition.AmazonRekognitionClientBuilder;
import com.amazonaws.services.rekognition.model.DeleteProjectVersionRequest;
import com.amazonaws.services.rekognition.model.DeleteProjectVersionResult;

public class DeleteModel {

    public static void main(String[] args) throws Exception {

        String modelArn = "model_arn";

        AmazonRekognition rekognitionClient = AmazonRekognitionClientBuilder.defaultClient();

        try {
            DeleteProjectVersionRequest request = new DeleteProjectVersionRequest()
                    .withProjectVersionArn(modelArn);

            DeleteProjectVersionResult result = rekognitionClient.deleteProjectVersion(request);
            System.out.println(result.getStatus());
        } catch (Exception e) {
            System.out.println(e.toString());
        }
        System.out.println("Done...");
    }
}


Analyzing an Image with a Trained Model

To analyze an image with a trained Amazon Rekognition Custom Labels model, you call the DetectCustomLabels API. The result from DetectCustomLabels is a prediction that the image contains specific objects, scenes, or concepts.

To call DetectCustomLabels, you specify the following:

• The Amazon Resource Name (ARN) of the Amazon Rekognition Custom Labels model that you want to use.

• The image that you want the model to make a prediction with. You can provide an input image as an image byte array (base64-encoded image bytes), or as an Amazon S3 object. For more information, see Image.

Custom labels are returned in an array of Custom Label objects. Each custom label represents a single object, scene, or concept found in the image. A custom label includes:

• A label for the object, scene, or concept found in the image.

• A bounding box for objects found in the image. The bounding box coordinates show where the object is located on the source image. The coordinate values are a ratio of the overall image size. For more information, see BoundingBox.

• The confidence that Amazon Rekognition Custom Labels has in the accuracy of the label and bounding box.
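Because the bounding box values are ratios of the image size, converting them to pixel coordinates is a simple multiplication. A small illustrative helper (the function name is ours; the dict keys match the BoundingBox fields used in the Python example later in this topic):

```python
def bounding_box_to_pixels(box, image_width, image_height):
    """Convert a ratio-based BoundingBox dict into pixel coordinates
    (left, top, width, height), rounded to whole pixels."""
    return (round(box['Left'] * image_width),
            round(box['Top'] * image_height),
            round(box['Width'] * image_width),
            round(box['Height'] * image_height))

# A box covering the center-left of an 800x600 image:
box = {'Left': 0.25, 'Top': 0.1, 'Width': 0.5, 'Height': 0.2}
print(bounding_box_to_pixels(box, 800, 600))  # (200, 60, 400, 120)
```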

During training, a model calculates a threshold value that determines whether a prediction for a label is true. By default, DetectCustomLabels doesn't return labels whose confidence value is below the model's calculated threshold value. To filter the labels that are returned, specify a value for MinConfidence that is higher than the model's calculated threshold. You can get the model's calculated threshold from the model's training results shown in the Amazon Rekognition Custom Labels console. To get all labels, regardless of confidence, specify a MinConfidence value of 0.

If you find that the confidence values returned by DetectCustomLabels are too low, consider retraining the model. For more information, see Training an Amazon Rekognition Custom Labels Model (p. 67). You can restrict the number of custom labels returned from DetectCustomLabels by specifying the MaxResults input parameter. The results are returned sorted from the highest confidence to the lowest.

For other examples that call DetectCustomLabels, see Examples (p. 103).

For information about securing DetectCustomLabels, see Securing DetectCustomLabels (p. 109).

To detect custom labels (API)

1. If you haven't already:

a. Create or update an IAM user with AmazonRekognitionFullAccess and AmazonS3ReadOnlyAccess permissions. For more information, see Step 2: Create an IAM Administrator User and Group (p. 6).


b. Install and configure the AWS CLI and the AWS SDKs. For more information, see Step 2: Set Up the AWS CLI and AWS SDKs (p. 7).

2. Train and deploy your model. For more information, see Getting Started with Amazon Rekognition Custom Labels (p. 12).

3. Ensure the IAM user calling DetectCustomLabels has access to the model that you deployed in step 2. For more information, see Securing DetectCustomLabels (p. 109).

4. Upload an image that you want to analyze to an S3 bucket.

For instructions, see Uploading Objects into Amazon S3 in the Amazon Simple Storage Service Console User Guide.

5. Use the following examples to call the DetectCustomLabels operation.

AWS CLI

This AWS CLI command displays the JSON output for the DetectCustomLabels CLI operation. Change the values of the following input parameters.

• bucket with the name of the Amazon S3 bucket that you used in step 4.
• image with the name of the input image file that you uploaded in step 4.
• projectVersionArn with the ARN of the model that you want to use.

aws rekognition detect-custom-labels --project-version-arn "model_arn" \
  --image '{"S3Object":{"Bucket":"bucket","Name":"image"}}' \
  --min-confidence 70

Python

The following example code displays bounding boxes around custom labels detected in an image. Replace the following values:

• bucket with the name of the Amazon S3 bucket that you used in step 4.
• image with the name of the input image file that you uploaded in step 4.
• min_confidence with the minimum confidence (0-100). Objects detected with a confidence lower than this value are not returned.
• model with the ARN of the model that you want to use with the input image.

#Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)

import boto3
import io
from PIL import Image, ImageDraw, ExifTags, ImageColor, ImageFont

def show_custom_labels(model, bucket, photo, min_confidence):

    client = boto3.client('rekognition')

    # Load image from S3 bucket
    s3_connection = boto3.resource('s3')
    s3_object = s3_connection.Object(bucket, photo)
    s3_response = s3_object.get()

    stream = io.BytesIO(s3_response['Body'].read())
    image = Image.open(stream)

    # Call DetectCustomLabels
    response = client.detect_custom_labels(
        Image={'S3Object': {'Bucket': bucket, 'Name': photo}},
        MinConfidence=min_confidence,
        ProjectVersionArn=model)

    imgWidth, imgHeight = image.size
    draw = ImageDraw.Draw(image)

    # Calculate and display bounding boxes for each detected custom label
    print('Detected custom labels for ' + photo)
    for customLabel in response['CustomLabels']:
        print('Label ' + str(customLabel['Name']))
        print('Confidence ' + str(customLabel['Confidence']))
        if 'Geometry' in customLabel:
            box = customLabel['Geometry']['BoundingBox']
            left = imgWidth * box['Left']
            top = imgHeight * box['Top']
            width = imgWidth * box['Width']
            height = imgHeight * box['Height']

            fnt = ImageFont.truetype('/Library/Fonts/Arial.ttf', 50)
            draw.text((left, top), customLabel['Name'], fill='#00d400', font=fnt)

            print('Left: ' + '{0:.0f}'.format(left))
            print('Top: ' + '{0:.0f}'.format(top))
            print('Label Width: ' + "{0:.0f}".format(width))
            print('Label Height: ' + "{0:.0f}".format(height))

            points = (
                (left, top),
                (left + width, top),
                (left + width, top + height),
                (left, top + height),
                (left, top))
            draw.line(points, fill='#00d400', width=5)

    image.show()

    return len(response['CustomLabels'])

def main():

    bucket = "bucket"
    photo = "image"
    model = 'model_arn'
    min_confidence = 95

    label_count = show_custom_labels(model, bucket, photo, min_confidence)
    print("Custom labels detected: " + str(label_count))

if __name__ == "__main__":
    main()

Java

The following example code displays bounding boxes around custom labels detected in an image. Replace the following values:


• bucket with the name of the Amazon S3 bucket that you used in step 4.
• photo with the name of the input image file that you uploaded in step 4.
• minConfidence with the minimum confidence (0-100). Objects detected with a confidence lower than this value are not returned.
• projectVersionArn with the ARN of the model that you want to use with the input image.

//Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
//PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)

package com.amazonaws.samples;

//Import the basic graphics classes.
import java.awt.*;
import java.awt.image.BufferedImage;
import java.util.List;
import javax.imageio.ImageIO;
import javax.swing.*;

import com.amazonaws.services.rekognition.AmazonRekognition;
import com.amazonaws.services.rekognition.AmazonRekognitionClientBuilder;
import com.amazonaws.services.rekognition.model.BoundingBox;
import com.amazonaws.services.rekognition.model.CustomLabel;
import com.amazonaws.services.rekognition.model.DetectCustomLabelsRequest;
import com.amazonaws.services.rekognition.model.DetectCustomLabelsResult;
import com.amazonaws.services.rekognition.model.Image;
import com.amazonaws.services.rekognition.model.S3Object;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.S3ObjectInputStream;

// Calls DetectCustomLabels and displays a bounding box around each detected custom label.
public class DisplayCustomLabels extends JPanel {

    private static final long serialVersionUID = 1L;

    BufferedImage image;
    static int scale;
    DetectCustomLabelsResult result;

    public DisplayCustomLabels(DetectCustomLabelsResult labelsResult, BufferedImage bufImage) throws Exception {
        super();
        scale = 4; // Increase to shrink image size.
        result = labelsResult;
        image = bufImage;
    }

    // Draws the bounding box around each detected custom label.
    public void paintComponent(Graphics g) {
        float left = 0;
        float top = 0;
        int height = image.getHeight(this);
        int width = image.getWidth(this);

        Graphics2D g2d = (Graphics2D) g; // Create a Java2D version of g.

        // Draw the image.
        g2d.drawImage(image, 0, 0, width / scale, height / scale, this);
        g2d.setColor(new Color(0, 212, 0));

        // Iterate through the custom labels and display bounding boxes.
        List<CustomLabel> customLabels = result.getCustomLabels();
        for (CustomLabel customLabel : customLabels) {
            if (customLabel.getGeometry() != null) {
                BoundingBox box = customLabel.getGeometry().getBoundingBox();
                left = width * box.getLeft();
                top = height * box.getTop();
                g2d.drawString(customLabel.getName(), left / scale, top / scale);
                g2d.drawRect(Math.round(left / scale), Math.round(top / scale),
                        Math.round((width * box.getWidth()) / scale),
                        Math.round((height * box.getHeight())) / scale);
            }
        }
    }

    public static void main(String arg[]) throws Exception {
        String photo = "image";
        String bucket = "bucket";
        String projectVersionArn = "model_arn";
        float minConfidence = 90;
        int height = 0;
        int width = 0;

        // Get the image from an S3 bucket.
        AmazonS3 s3client = AmazonS3ClientBuilder.defaultClient();
        com.amazonaws.services.s3.model.S3Object s3object = s3client.getObject(bucket, photo);
        S3ObjectInputStream inputStream = s3object.getObjectContent();
        BufferedImage image = ImageIO.read(inputStream);

        DetectCustomLabelsRequest request = new DetectCustomLabelsRequest()
                .withProjectVersionArn(projectVersionArn)
                .withImage(new Image().withS3Object(new S3Object().withName(photo).withBucket(bucket)))
                .withMinConfidence(minConfidence);

        width = image.getWidth();
        height = image.getHeight();

        // Call DetectCustomLabels.
        AmazonRekognition amazonRekognition = AmazonRekognitionClientBuilder.defaultClient();
        DetectCustomLabelsResult result = amazonRekognition.detectCustomLabels(request);

        // Show the bounding box info for each custom label.
        List<CustomLabel> customLabels = result.getCustomLabels();
        for (CustomLabel customLabel : customLabels) {
            if (customLabel.getGeometry() != null) {
                BoundingBox box = customLabel.getGeometry().getBoundingBox();
                float left = width * box.getLeft();
                float top = height * box.getTop();
                System.out.println("Custom Label:");
                System.out.println("Left: " + String.valueOf((int) left));
                System.out.println("Top: " + String.valueOf((int) top));
                System.out.println("Label Width: " + String.valueOf((int) (width * box.getWidth())));
                System.out.println("Label Height: " + String.valueOf((int) (height * box.getHeight())));
                System.out.println();
            }
        }

        // Create frame and panel.
        JFrame frame = new JFrame("DisplayCustomLabels");
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        DisplayCustomLabels panel = new DisplayCustomLabels(result, image);
        panel.setPreferredSize(new Dimension(image.getWidth() / scale, image.getHeight() / scale));
        frame.setContentPane(panel);
        frame.pack();
        frame.setVisible(true);
    }
}

DetectCustomLabels Operation Request

In the DetectCustomLabels operation, you supply an input image either as a base64-encoded byte array or as an image stored in an Amazon S3 bucket. The following example JSON request shows the image loaded from an Amazon S3 bucket.

{
    "ProjectVersionArn": "string",
    "Image": {
        "S3Object": {
            "Bucket": "string",
            "Name": "string",
            "Version": "string"
        }
    },
    "MinConfidence": 90,
    "MaxLabels": 10
}
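When you call the operation through an AWS SDK, you can also pass the image as raw bytes; the SDK handles the encoding for you. The following Python sketch shows one way to do this. The `build_bytes_request` helper and the local file path are illustrative assumptions, not part of the Amazon Rekognition API; only `detect_custom_labels` and its parameters come from the service.

```python
def build_bytes_request(image_bytes, model_arn, min_confidence=70):
    """Build the keyword arguments for a bytes-based DetectCustomLabels call."""
    return {
        'Image': {'Bytes': image_bytes},   # raw image bytes instead of an S3Object
        'MinConfidence': min_confidence,
        'ProjectVersionArn': model_arn,
    }

def detect_from_local_file(model_arn, image_path):
    """Read a local JPEG or PNG file and analyze it with DetectCustomLabels."""
    import boto3  # imported here; the call requires AWS credentials and a running model
    with open(image_path, 'rb') as f:
        kwargs = build_bytes_request(f.read(), model_arn)
    client = boto3.client('rekognition')
    return client.detect_custom_labels(**kwargs)['CustomLabels']
```

Passing bytes avoids the extra S3 round trip when the image already exists on the calling machine, at the cost of the smaller size limit for raw bytes (see Limits).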

DetectCustomLabels Operation Response

The following JSON response from the DetectCustomLabels operation shows the custom labels that were detected in an example image.

{
    "CustomLabels": [
        {
            "Name": "MyLogo",
            "Confidence": 77.7729721069336,
            "Geometry": {
                "BoundingBox": {
                    "Width": 0.198987677693367,
                    "Height": 0.31296101212501526,
                    "Left": 0.07924537360668182,
                    "Top": 0.4037395715713501
                }
            }
        }
    ]
}
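The BoundingBox values in the response are ratios of the overall image size, so they must be scaled by the image's pixel dimensions before you can draw them, as the earlier Python and Java examples do. The helper below is an illustrative sketch, not part of the API; the 1000 x 800 pixel image size is a hypothetical example.

```python
def box_to_pixels(box, image_width, image_height):
    """Convert a normalized BoundingBox (Left/Top/Width/Height as 0-1 ratios)
    to integer pixel coordinates for a given image size."""
    return (int(round(box['Left'] * image_width)),
            int(round(box['Top'] * image_height)),
            int(round(box['Width'] * image_width)),
            int(round(box['Height'] * image_height)))

# Applying the example response values to a hypothetical 1000 x 800 pixel image:
box = {'Width': 0.198987677693367, 'Height': 0.31296101212501526,
       'Left': 0.07924537360668182, 'Top': 0.4037395715713501}
print(box_to_pixels(box, 1000, 800))  # (79, 323, 199, 250)
```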


Examples

This section contains information about examples that you can use with Amazon Rekognition Custom Labels.

Example                                     Description

Model Feedback Solution (p. 103)            Shows how to improve a model using human verification to create a new training dataset.

Custom Labels Demonstration (p. 103)        Demonstration of a user interface that displays the results of a call to DetectCustomLabels.

Video Analysis (p. 104)                     Shows how you can use DetectCustomLabels with frames extracted from a video.

Creating an AWS Lambda Function (p. 105)    Shows how you can use DetectCustomLabels with a Lambda function.

Model Feedback Solution

The Model Feedback solution enables you to give feedback on your model's predictions and make improvements by using human verification. Depending on the use case, you can be successful with a training dataset that has only a few images. A larger annotated training set might be required to enable you to build a more accurate model. The Model Feedback solution allows you to create a larger dataset through model assistance.

To install and configure the Model Feedback solution, see Model Feedback Solution.

The workflow for continuous model improvement is as follows:

1. Train the first version of your model (possibly with a small training dataset).

2. Provide an unannotated dataset for the Model Feedback solution.

3. The Model Feedback solution uses the current model. It starts human verification jobs to annotate a new dataset.

4. Based on human feedback, the Model Feedback solution generates a manifest file that you use to create a new model.

Custom Labels Demonstration

The Amazon Rekognition Custom Labels Demonstration shows a user interface that analyzes images from your local computer by using the DetectCustomLabels API.

The application shows you information about the Amazon Rekognition Custom Labels models in your account. After you select a running model, you can analyze an image from your local computer. If necessary, the application allows you to start a model. You can also stop a running model. The application shows integration with other AWS services such as Amazon Cognito, Amazon S3, and Amazon CloudFront.

For more information, see Amazon Rekognition Custom Labels Demo.

Video Analysis

The following example shows how you can use DetectCustomLabels with frames extracted from a video. The code has been tested with video files in mov and mp4 format.

Using DetectCustomLabels with captured frames

1. If you haven't already:

a. Create or update an IAM user with AmazonRekognitionFullAccess and AmazonS3ReadOnlyAccess permissions. For more information, see Step 2: Create an IAM Administrator User and Group (p. 6).

b. Install and configure the AWS CLI and the AWS SDKs. For more information, see Step 2: Set Up the AWS CLI and AWS SDKs (p. 7).

2. Use the following example code. Change the value of videoFile to the name of a video file. Change the value of projectVersionArn to the Amazon Resource Name (ARN) of your Amazon Rekognition Custom Labels model.

#Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)

import json
import boto3
import cv2
import math
import io

def analyzeVideo():
    videoFile = "video file"
    projectVersionArn = "project arn"

    rekognition = boto3.client('rekognition')
    customLabels = []
    cap = cv2.VideoCapture(videoFile)
    frameRate = cap.get(cv2.CAP_PROP_FPS)  # frame rate
    while cap.isOpened():
        frameId = cap.get(cv2.CAP_PROP_POS_FRAMES)  # current frame number
        print("Processing frame id: {}".format(frameId))
        ret, frame = cap.read()
        if ret != True:
            break
        if frameId % math.floor(frameRate) == 0:
            hasFrame, imageBytes = cv2.imencode(".jpg", frame)
            if hasFrame:
                response = rekognition.detect_custom_labels(
                    Image={
                        'Bytes': imageBytes.tobytes(),
                    },
                    ProjectVersionArn=projectVersionArn)
                for elabel in response["CustomLabels"]:
                    elabel["Timestamp"] = (frameId / frameRate) * 1000
                    customLabels.append(elabel)

    print(customLabels)

    with open(videoFile + ".json", "w") as f:
        f.write(json.dumps(customLabels))

    cap.release()

analyzeVideo()

Creating an AWS Lambda Function

You can call Amazon Rekognition Custom Labels API operations from within an AWS Lambda function. The following instructions show how to create a Lambda function in Python that calls DetectCustomLabels. It returns a list of custom label objects. To run this example, you need an Amazon S3 bucket that contains an image in PNG or JPEG format.

Step 1: Create an AWS Lambda deployment package

1. Open a command window.
2. Enter the following commands to create a deployment package with the most recent version of the AWS SDK.

   pip install boto3 --target python/
   zip boto3-layer.zip -r python/

Step 2: Create an AWS Lambda function (console)

1. Sign in to the AWS Management Console and open the AWS Lambda console at https://console.aws.amazon.com/lambda/.
2. Choose Create function. For more information, see Create a Lambda Function with the Console.
3. Choose the following options.
   • Choose Author from scratch.
   • Enter a value for Function name.
   • For Runtime, choose Python 3.7 or Python 3.6.
   • For Choose or create an execution role, choose Create a new role with basic Lambda permissions.
4. Choose Create function to create the AWS Lambda function.
5. Open the IAM console at https://console.aws.amazon.com/iam/.
6. From the navigation pane, choose Roles.
7. From the resources list, choose the IAM role that AWS Lambda created for you. The role name is prepended with the name of your Lambda function.
8. In the Permissions tab, choose Attach policies.
9. Add the AmazonRekognitionFullAccess and AmazonS3ReadOnlyAccess policies.
10. Choose Attach Policy.

Step 3: Create and add a layer (console)

1. Open the AWS Lambda console at https://console.aws.amazon.com/lambda/.


2. In the navigation pane, choose Layers.

3. Choose Create layer.

4. Enter values for Name and Description.

5. For Code entry type, choose Upload .zip file and choose Upload.

6. In the dialog box, choose the zip file (boto3-layer.zip) that you created in Step 1: Create an AWS Lambda deployment package (p. 105).

7. For compatible runtimes, choose the runtime that you chose in Step 2: Create an AWS Lambda function (console) (p. 105).

8. Choose Create to create the layer.

9. Choose the navigation pane menu icon.

10. In the navigation pane, choose Functions.

11. In the resources list, choose the function that you created in Step 2: Create an AWS Lambda function (console) (p. 105).

12. In the Designer section of the Configuration tab, choose Layers (under your Lambda function name).

13. In the Layers section, choose Add a layer.

14. Choose Select from list of runtime compatible layers.

15. In Compatible layers, choose the Name and Version of the layer that you created in step 8.

16. Choose Add.

Step 4: Add Python code (console)

1. In Designer, choose your function name.

2. In the function code editor, add the following to the file lambda_function.py. Change the values of bucket and photo to your bucket and image. Change the value of model_arn to the Amazon Resource Name (ARN) of your Amazon Rekognition Custom Labels model.

import json
import boto3

def lambda_handler(event, context):

    bucket = "bucket"
    photo = "image"
    model_arn = 'model arn'

    client = boto3.client('rekognition')

    # Process using the S3 object
    response = client.detect_custom_labels(
        Image={'S3Object': {'Bucket': bucket, 'Name': photo}},
        MinConfidence=30,
        ProjectVersionArn=model_arn)

    # Get the custom labels
    labels = response['CustomLabels']

    return {
        'statusCode': 200,
        'body': json.dumps(labels)
    }

3. Choose Save to save your Lambda function.


Step 5: Test your Lambda function (console)

1. Choose Test.
2. Enter a value for Event name.
3. Choose Create.
4. Choose Test. The Lambda function is invoked. The output is displayed in the Execution results pane of the code editor. The output is a list of custom labels.

If the AWS Lambda function returns a timeout error, a call to an Amazon Rekognition Custom Labels API operation might be the cause. For information about extending the timeout period for an AWS Lambda function, see AWS Lambda Function Configuration.

For information about invoking a Lambda function from your code, see Invoking AWS Lambda Functions.
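As an illustration, the function created above could be invoked synchronously from Python with the boto3 Lambda client. This is a sketch under the assumption that you pass the function name you chose in Step 2; `parse_lambda_response` is a hypothetical helper for unpacking the handler's return value, not part of the AWS SDK.

```python
import json

def parse_lambda_response(payload_bytes):
    """Decode the Lambda payload returned by the handler in Step 4 and
    return the list of custom labels from its JSON-encoded 'body' field."""
    result = json.loads(payload_bytes)
    return json.loads(result['body'])  # 'body' is itself a JSON string of labels

def invoke_label_function(function_name):
    """Invoke the Lambda function synchronously and return the custom labels."""
    import boto3  # imported here; requires AWS credentials and the deployed function
    client = boto3.client('lambda')
    response = client.invoke(FunctionName=function_name)
    return parse_lambda_response(response['Payload'].read())
```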


Security

You can secure the management of your models and the DetectCustomLabels API that your customers use to detect custom labels.

For more information about securing Amazon Rekognition, see Amazon Rekognition Security.

Securing Amazon Rekognition Custom Labels Projects

You can secure your Amazon Rekognition Custom Labels projects by specifying resource-level permissions in identity-based policies. For more information, see Identity-Based Policies and Resource-Based Policies.

The Amazon Rekognition Custom Labels resources that you can secure are:

Resource    Amazon Resource Name Format

Project     arn:aws:rekognition:*:*:project/project_name/datetime

Model       arn:aws:rekognition:*:*:project/project_name/version/name/datetime

The following example policy shows how to give an identity permission to:

• Describe all projects.
• Create, start, stop, and use a specific model for inference.
• Create a project. Create and describe a specific model.
• Deny the creation of a specific project.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllResources",
            "Effect": "Allow",
            "Action": "rekognition:DescribeProjects",
            "Resource": "*"
        },
        {
            "Sid": "SpecificProjectVersion",
            "Effect": "Allow",
            "Action": [
                "rekognition:StopProjectVersion",
                "rekognition:StartProjectVersion",
                "rekognition:DetectCustomLabels",
                "rekognition:CreateProjectVersion"
            ],
            "Resource": "arn:aws:rekognition:*:*:project/MyProject/version/MyVersion/*"
        },
        {
            "Sid": "SpecificProject",
            "Effect": "Allow",
            "Action": [
                "rekognition:CreateProject",
                "rekognition:DescribeProjectVersions",
                "rekognition:CreateProjectVersion"
            ],
            "Resource": "arn:aws:rekognition:*:*:project/MyProject/*"
        },
        {
            "Sid": "ExplicitDenyCreateProject",
            "Effect": "Deny",
            "Action": [
                "rekognition:CreateProject"
            ],
            "Resource": ["arn:aws:rekognition:*:*:project/SampleProject/*"]
        }
    ]
}

Securing DetectCustomLabels

The identity used to detect custom labels might be different from the identity that manages Amazon Rekognition Custom Labels models.

You can secure an identity's access to DetectCustomLabels by applying a policy to the identity. The following example restricts access to DetectCustomLabels only, and to a specific model. The identity doesn't have access to any of the other Amazon Rekognition operations.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "rekognition:DetectCustomLabels"
            ],
            "Resource": "arn:aws:rekognition:*:*:project/MyProject/version/MyVersion/*"
        }
    ]
}


Limits in Amazon Rekognition Custom Labels

The following is a list of limits in Amazon Rekognition Custom Labels. For information about limits you can change, see AWS Service Limits. To change a limit, see Create Case.

Training

• Supported file formats are PNG and JPEG image formats.
• Maximum number of training datasets in a version of a model is 1.
• Maximum dataset manifest file size is 1 GB.
• Minimum number of unique labels per Objects, Scenes, and Concepts (classification) dataset is 2.
• Minimum number of unique labels per Object Location (detection) dataset is 1.
• Maximum number of unique labels per manifest is 250.
• Minimum number of images per label is 1.
• Maximum number of images per Object Location (detection) dataset is 250,000.
• Maximum number of images per Objects, Scenes, and Concepts (classification) dataset is 500,000. The default is 250,000. To request an increase, see Create Case.
• Maximum number of labels per image is 50.
• Minimum number of bounding boxes in an image is 0.
• Maximum number of bounding boxes in an image is 50.
• Minimum image dimension of image file in an Amazon S3 bucket is 64 pixels x 64 pixels.
• Maximum image dimension of image file in an Amazon S3 bucket is 4096 pixels x 4096 pixels.
• Maximum file size for an image in an Amazon S3 bucket is 15 MB.

Testing

• Maximum number of testing datasets in a version of a model is 1.
• Maximum dataset manifest file size is 1 GB.
• Minimum number of unique labels per Objects, Scenes, and Concepts (classification) dataset is 2.
• Minimum number of unique labels per Object Location (detection) dataset is 1.
• Maximum number of unique labels per dataset is 250.
• Minimum number of images per label is 0.
• Maximum number of images per label is 1000.
• Maximum number of images per Object Location (detection) dataset is 250,000.
• Maximum number of images per Objects, Scenes, and Concepts (classification) dataset is 500,000. The default is 250,000. To request an increase, see Create Case.
• Minimum number of labels per image per manifest is 0.
• Maximum number of labels per image per manifest is 50.
• Minimum number of bounding boxes in an image per manifest is 0.
• Maximum number of bounding boxes in an image per manifest is 50.


• Minimum image dimension of an image file in an Amazon S3 bucket is 64 pixels x 64 pixels.
• Maximum image dimension of an image file in an Amazon S3 bucket is 4096 pixels x 4096 pixels.
• Maximum file size for an image in an Amazon S3 bucket is 15 MB.
• Supported file formats are PNG and JPEG image formats.

Detection

• Maximum size of images passed as raw bytes is 4 MB.
• Maximum file size for an image in an Amazon S3 bucket is 15 MB.
• Minimum image dimension of an image file in an Amazon S3 bucket is 64 pixels x 64 pixels.
• Maximum image dimension of an image file in an Amazon S3 bucket is 4096 pixels x 4096 pixels.
• Supported file formats are PNG and JPEG image formats.
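A client-side pre-flight check against these detection limits can avoid unnecessary failed calls. The sketch below hard-codes the limits listed above; `check_image` is an illustrative helper, not part of any SDK, and the values are assumptions that should be verified against the current service quotas before use.

```python
import os

# Detection limits from the list above (assumptions; quotas can change).
MAX_S3_BYTES = 15 * 1024 * 1024    # 15 MB for images in an S3 bucket
MAX_RAW_BYTES = 4 * 1024 * 1024    # 4 MB for images passed as raw bytes
MIN_DIMENSION, MAX_DIMENSION = 64, 4096
SUPPORTED_EXTENSIONS = {'.png', '.jpg', '.jpeg'}

def check_image(path, width, height, raw_bytes=False):
    """Return a list of detection-limit violations (an empty list means the image passes)."""
    problems = []
    if os.path.splitext(path)[1].lower() not in SUPPORTED_EXTENSIONS:
        problems.append('unsupported format (PNG and JPEG only)')
    size_limit = MAX_RAW_BYTES if raw_bytes else MAX_S3_BYTES
    if os.path.getsize(path) > size_limit:
        problems.append('file larger than {} bytes'.format(size_limit))
    if min(width, height) < MIN_DIMENSION:
        problems.append('image smaller than 64 x 64 pixels')
    if max(width, height) > MAX_DIMENSION:
        problems.append('image larger than 4096 x 4096 pixels')
    return problems
```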


Amazon Rekognition Custom Labels API Reference

The Amazon Rekognition Custom Labels API is documented as part of the Amazon Rekognition API reference content. This is a list of the Amazon Rekognition Custom Labels API operations with links to the appropriate Amazon Rekognition API reference topic. Also, API reference links within this document go to the appropriate Amazon Rekognition Developer Guide API reference topic.

Training Your Model

• CreateProject — Creates your Amazon Rekognition Custom Labels project, which is a logical grouping of resources (images, labels, models) and operations (training, evaluation, and detection).
• CreateProjectVersion — Trains your Amazon Rekognition Custom Labels model using the provided training and test datasets. Optionally, you can autosplit a training dataset to create a test dataset.
• DeleteProject — Deletes an Amazon Rekognition Custom Labels project.
• DeleteProjectVersion — Deletes an Amazon Rekognition Custom Labels model.
• DescribeProjects — Returns a list of all your projects.
• DescribeProjectVersions — Returns a list of all the custom labels models within a specific project.

Using Your Model

• DetectCustomLabels — Analyzes an image with your custom labels model.
• StartProjectVersion — Starts your custom labels model.
• StopProjectVersion — Stops your custom labels model.
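As a sketch of how these operations fit together, DescribeProjectVersions can be used to find the most recently trained model version before calling StartProjectVersion and DetectCustomLabels. The helper below is illustrative (not part of the SDK); it assumes the DescribeProjectVersions response shape and the TRAINING_COMPLETED status value.

```python
def latest_trained_version(describe_response):
    """Given a DescribeProjectVersions response, return the ARN of the most
    recently created model version whose training succeeded, or None."""
    trained = [v for v in describe_response['ProjectVersionDescriptions']
               if v['Status'] == 'TRAINING_COMPLETED']
    if not trained:
        return None
    return max(trained, key=lambda v: v['CreationTimestamp'])['ProjectVersionArn']
```

A caller would pass the output of `client.describe_project_versions(ProjectArn=...)` to this helper, then supply the returned ARN to `start_project_version` and `detect_custom_labels`.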


Document History for Amazon Rekognition Custom Labels

The following table describes important changes in each release of the Amazon Rekognition Custom Labels Developer Guide. For notification about updates to this documentation, you can subscribe to an RSS feed.

• Latest documentation update: June 25th, 2020

Amazon Rekognition Custom Labels now supports single object training (June 25, 2020)
To create an Amazon Rekognition Custom Labels model that finds the location of a single object, you can now create a dataset that only requires one label.

Project and model delete operations added (April 1, 2020)
You can now delete Amazon Rekognition Custom Labels projects and models with the console and with the API.

Added Java examples (December 13, 2019)
Added Java examples covering project creation, model training, model running, and image analysis.

New feature and guide (December 3, 2019)
This is the initial release of the Amazon Rekognition Custom Labels feature and the Amazon Rekognition Custom Labels Developer Guide.


AWS glossary

For the latest AWS terminology, see the AWS glossary in the AWS General Reference.
