Page 1

EVALUATION OF ARTIFICIAL INTELLIGENCE SYSTEMS

18/10/2019

GUILLAUME AVRIN

Page 2

Our positioning in AI


Page 3

EVALUATION OF ARTIFICIAL INTELLIGENCE SYSTEMS

LNE, state-owned trusted third party for the evaluation of AI and robots

As a state-owned laboratory, LNE is independent of any private interest (a reinforced notion of trusted third party) and the sincerity of its evaluations is guaranteed.

More than 10 years of experience in AI evaluation and more than 900 systems evaluated by a permanent team of PhDs and engineers specialized in evaluation.

Page 4

As a public research laboratory, LNE organizes evaluation campaigns and develops evaluation methods and metrics.

1. Assistance to public bodies
Objective: measure technological progress and estimate the impact of investments in order to optimize public funding of research.

2. Technical assistance to companies
To developers (development, acceptance testing, comparison through challenges) and to end users.
Objective: provide our partners with reliable benchmarks and results to enable pragmatic and well-reasoned decision-making.

3. Participation in standardization activities
AFNOR AI, ISO AI, UNM 81, etc.
Objective: establish benchmarks to simplify contractual relations and encourage innovation while ensuring the protection of citizens and consumers.

Page 5

The evaluation, step by step


Page 6

AI EVALUATION PROCESS

[Diagram] A testing dataset is processed in parallel by a human, who produces the reference (ground truth), and by the system, which produces the hypothesis. Comparison metrics applied to the reference and the hypothesis yield the performance estimation.
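As a minimal sketch of this comparison step (illustrative only, not the tooling used at LNE), the following Python snippet assumes aligned lists of human references and system hypotheses for the same testing dataset and computes a simple error rate; all names and data are hypothetical.

```python
# Minimal sketch of the reference-vs-hypothesis comparison step.
# Assumes one reference (human annotation) and one hypothesis (system output)
# per test item; names and data are illustrative, not LNE's actual tools.

def error_rate(references, hypotheses):
    """Fraction of test items where the system output differs from the reference."""
    if len(references) != len(hypotheses):
        raise ValueError("references and hypotheses must be aligned")
    errors = sum(ref != hyp for ref, hyp in zip(references, hypotheses))
    return errors / len(references)

if __name__ == "__main__":
    references = ["cat", "dog", "dog", "cat", "bird"]   # human ground truth
    hypotheses = ["cat", "dog", "cat", "cat", "bird"]   # system outputs
    print(f"Error rate: {error_rate(references, hypotheses):.2%}")  # 20.00%
```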

Page 7

AI EVALUATION PROCESS

[Diagram] The system perceives its environment through sensors, interprets the sensed data, makes a decision and acts back on the environment through effectors.
• Black-box evaluation: only the overall system is evaluated, through the actions it produces in its environment.
• Gray-box evaluation: the internal modules are also evaluated individually — detection evaluation, interpretation evaluation, decision-making evaluation and action evaluation — alongside the overall system evaluation.
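To make the black-box/gray-box distinction concrete, here is a hedged sketch with a toy two-module system (hypothetical modules, thresholds and references, not LNE's protocol): the black-box score only compares the final action to its reference, while the gray-box scores also compare each module's intermediate output to a module-level reference.

```python
# Hedged sketch contrasting black-box and gray-box evaluation.
# The toy "system" maps a sensor reading to an action through two modules;
# all names, thresholds and references are hypothetical.

def detect(sensor_reading):
    """Detection module: flags an obstacle when the reading exceeds a threshold."""
    return sensor_reading > 0.5

def decide(obstacle_detected):
    """Decision module: stop if an obstacle was detected, otherwise go."""
    return "stop" if obstacle_detected else "go"

def system(sensor_reading):
    """Overall system: sensors -> detection -> decision -> action."""
    return decide(detect(sensor_reading))

test_cases = [  # hypothetical references at both module and system level
    {"reading": 0.9, "obstacle": True,  "action": "stop"},
    {"reading": 0.2, "obstacle": False, "action": "go"},
    {"reading": 0.6, "obstacle": True,  "action": "stop"},
]

# Black-box evaluation: only the final action is compared to its reference.
black_box_acc = sum(system(c["reading"]) == c["action"] for c in test_cases) / len(test_cases)

# Gray-box evaluation: intermediate outputs are also compared to module-level references.
detection_acc = sum(detect(c["reading"]) == c["obstacle"] for c in test_cases) / len(test_cases)
decision_acc = sum(decide(c["obstacle"]) == c["action"] for c in test_cases) / len(test_cases)

print(f"black-box (overall) accuracy: {black_box_acc:.0%}")
print(f"gray-box detection accuracy:  {detection_acc:.0%}")
print(f"gray-box decision accuracy:   {decision_acc:.0%}")
```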

Page 8

ILLUSTRATION OF THE STEPS OF AN EVALUATION

1. Task definition
2. Provision of testing datasets and environments
3. Retrieval of system outputs
4. Comparison of system outputs and references
5. Scoring and error analysis
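The five steps can be pictured as a small driver script; the sketch below is purely illustrative (hypothetical task, dataset and system outputs) and adds a breakdown of the scores by an influencing factor, in the spirit of the error-analysis step.

```python
# Hedged sketch of the five evaluation steps as a small driver; the task,
# dataset and "influencing factor" breakdown below are purely illustrative.

from collections import defaultdict

# 1. Task definition: binary "alarm / no alarm" classification (hypothetical).
# 2. Provision of the testing dataset (with a recording condition as influencing factor).
testing_dataset = [
    {"id": 1, "condition": "quiet", "reference": "alarm"},
    {"id": 2, "condition": "quiet", "reference": "no_alarm"},
    {"id": 3, "condition": "noisy", "reference": "alarm"},
    {"id": 4, "condition": "noisy", "reference": "no_alarm"},
]

# 3. Retrieval of system outputs (here a stub standing in for the submitted system).
system_outputs = {1: "alarm", 2: "no_alarm", 3: "no_alarm", 4: "no_alarm"}

# 4. Comparison of system outputs and references.
results = [(item, system_outputs[item["id"]] == item["reference"]) for item in testing_dataset]

# 5. Scoring and error analysis, broken down by the influencing factor.
per_condition = defaultdict(list)
for item, correct in results:
    per_condition[item["condition"]].append(correct)

print(f"overall accuracy: {sum(c for _, c in results) / len(results):.0%}")
for condition, outcomes in per_condition.items():
    print(f"  {condition}: {sum(outcomes) / len(outcomes):.0%}")
```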

Page 9

EVALUATION: AN EXPERTISE IN ITS OWN RIGHT

Evaluation plan
1. Testing scenarios: identification of the technoscientific barriers to be removed; definition of participation terms and conditions; definition of the evaluation tasks.
2. Protocols, metrics: identification of influencing factors; definition of evaluation criteria and metrics; interpretation of results.
3. Testing environments: development of adapted testing environments; control and measurement of influencing factors; ensuring the reproducibility of experiments.

Evaluation references
4. Data: data selection (relevance, representativeness, quality); development of tools for data collection; development of tools for data management and sharing (server).
5. References (ground truth): development of annotation systems; data annotation or supervision of data annotation; qualification of annotations and annotators.
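The qualification of annotations and annotators is commonly quantified through inter-annotator agreement; as one standard example (not necessarily the measure used by LNE), the sketch below computes Cohen's kappa between two hypothetical annotators.

```python
# Hedged sketch: Cohen's kappa between two annotators as one possible way to
# qualify annotations and annotators; labels and data are hypothetical.

from collections import Counter

def cohen_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators on the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[l] * freq_b[l] for l in set(labels_a) | set(labels_b)) / (n * n)
    return (observed - expected) / (1 - expected)

annotator_1 = ["person", "person", "object", "object", "person", "object"]
annotator_2 = ["person", "object", "object", "object", "person", "object"]
print(f"Cohen's kappa: {cohen_kappa(annotator_1, annotator_2):.2f}")  # 0.67
```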

Page 10

LNE EVALUATION TOOLS

The tools cover the five components above (testing scenarios; protocols and metrics; testing environments; data; references).

Open-source Matics software suite to explore annotated data and evaluation results:
• Translation
• Diarization
• Transcription
• Speaker verification
And soon:
• OCR
• Image recognition

DIANNE software:
• annotation and automatic pre-annotation of crops and weeds
• will be extended to other recognition tasks

Evaluation of robots:
• laboratory testing (in LNE climatic chambers)
• virtual testing (simulation-based)

Page 11

OUR TOOLS

Matics software suite – data visualisation and evaluation:
• Datomatic – dataset preparation and visualisation
• Evalomatic – evaluation and visualisation

[Screenshots: Evalomatic evaluation scores (transcription task); Datomatic data visualisation (transcription task)]
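Transcription scores of the kind Evalomatic reports typically revolve around the word error rate (WER); the sketch below is a minimal, self-contained WER computation (edit distance over word sequences) offered as an illustration, not a reproduction of Evalomatic's implementation.

```python
# Hedged sketch of word error rate (WER), the standard transcription metric:
# (substitutions + deletions + insertions) / number of reference words.
# Illustrative only, not Evalomatic's actual code.

def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance between word sequences via dynamic programming.
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i
    for j in range(len(hyp) + 1):
        dist[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,         # deletion
                             dist[i][j - 1] + 1,         # insertion
                             dist[i - 1][j - 1] + cost)  # substitution / match
    return dist[len(ref)][len(hyp)] / len(ref)

reference = "welcome we are delighted to have you here"
hypothesis = "welcome we are delighted to see you here"
print(f"WER: {word_error_rate(reference, hypothesis):.2%}")  # 1 substitution / 8 words = 12.50%
```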

Page 12

OUR TOOLS

Matics software suite – Evalomatic

[Screenshot: graphical visualisation]

Page 13

OUR TOOLS

DIANNE: edge detection, identification and annotation for evaluation

Page 14

CHALLENGE ORGANISATION

Timeline of a challenge:
• Prepa.: definition of the evaluation plans and test data or facilities
• Dry-run: evaluation protocol validation
• Eval. 1: first appraisal
• Eval. 2: measure of improvements

[Chart: Evolution of error rates – person recognition (REPERE campaign); CER (%) per participant (Participants 1, 3 and 4), 2013–2014]
[Chart: Evolution of error rates – optical character recognition (MAURDOR campaign); EGER (%) per participant (Participants 1–3, supervised and unsupervised), 2012–2013]

Page 15

For which application areas?


Page 16

EXPERTISE IN EVALUATION OF INFORMATION PROCESSING SYSTEMS

SPEECH: transcription, keyword spotting, speaker comparison, named entity recognition, speaker tracking, translation, etc.
TEXT: topic detection, named entity recognition, information retrieval, translation, etc.
IMAGE: head tracking, optical character recognition, etc.
MULTIMEDIA: person tracking, document classification, etc.

Example translation pair: "Welcome, we are delighted to have you here" / "ترحيب ، يسعدنا أن نرحب بكم" (Arabic: "Welcome, we are pleased to welcome you")

Challenges (Quaero, Repere, etc.) · Benchmarking (INC) · Qualification (Allies) · Certification (Voxcrim)

Page 17

EVALUATION OF ROBOTS

[Photos: HRP2 robot (Franco-Japanese humanoid robot) evaluated in climatic chambers at LNE; simulation of the autonomous vehicle]

Smart mobility: simulation for autonomous vehicle safety
Agri-food: risk analysis, scientific monitoring and community structuring, organization of a challenge in agricultural robotics
Service: development of evaluation tools
Public-private partnership: study of the influence of climatic conditions on the performance of AI systems, assessment of AI and cybersecurity of smart medical devices

Page 18

Our orientations


Page 19

OUR ORIENTATIONS

Metrology: develop standards and protocols for the evaluation of AI
Evaluation: set up AI assessment and testing centres
Certification: promote the certification of AI

Page 20

METROLOGY OF AI

Definition of standards: reference testing datasets and environments, metrics, etc.
Definition of evaluation protocols: testing scenarios, evaluation tasks, methods for calculating the measurement uncertainty, etc.

For performance evaluation:
• accuracy, precision, trueness, fidelity, error rate, sensitivity, specificity, etc.
• robustness, resilience and operating range
• dataset qualification (representativeness)
• other performance requirements (speed, efficiency, ergonomics, etc.)

To promote acceptability:
• explainability, intelligibility, predictability, readable behaviour
• regulation (transparency, non-discrimination)
• security (controllable, auditable)
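Several of the performance criteria listed above (accuracy, error rate, sensitivity, specificity, precision) can be derived from a binary confusion matrix; the sketch below uses hypothetical counts purely for illustration.

```python
# Hedged sketch: common performance criteria derived from a binary confusion
# matrix (counts are hypothetical, purely for illustration).

def confusion_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    total = tp + fp + tn + fn
    return {
        "accuracy":    (tp + tn) / total,  # overall fraction of correct outputs
        "error rate":  (fp + fn) / total,  # complement of accuracy
        "sensitivity": tp / (tp + fn),     # true positive rate (recall)
        "specificity": tn / (tn + fp),     # true negative rate
        "precision":   tp / (tp + fp),     # positive predictive value
    }

# Hypothetical detection results on a qualified test dataset.
for name, value in confusion_metrics(tp=85, fp=10, tn=90, fn=15).items():
    print(f"{name}: {value:.2%}")
```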

Page 21

EXPLAINABILITY

A tool to facilitate verifications and make them more reliable
• Solving the “black box” problem?
• To estimate the operating domain, better identify rare (but critical) phenomena, etc.

Towards the evaluation of explainability
• Measuring performance
  o Characterise explainability (according to context, requirements, user profile, etc.)
  o Define objective metrics
• Development of standards
  o Type of information to be extracted, reference values, etc.
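As one possible objective metric, offered here purely as an illustration and not as a metric prescribed by the presentation, one can measure the fidelity of an interpretable surrogate explanation, i.e. how often a human-readable rule reproduces the black-box system's decisions on the test data; everything in the sketch below is hypothetical.

```python
# Hedged sketch of one candidate objective explainability metric: the fidelity
# of an interpretable surrogate (a single-threshold rule) to a black-box model,
# i.e. how often both produce the same decision. All models and data are
# hypothetical; this is not a metric prescribed by the presentation.

def black_box(x: float) -> str:
    """Stand-in for an opaque AI system."""
    return "accept" if 0.4 * x + 0.3 > 0.5 else "reject"

def surrogate_rule(x: float) -> str:
    """Human-readable explanation: 'accept when x exceeds 0.4'."""
    return "accept" if x > 0.4 else "reject"

test_inputs = [0.1, 0.3, 0.45, 0.55, 0.7, 0.9]  # hypothetical test points
fidelity = sum(black_box(x) == surrogate_rule(x) for x in test_inputs) / len(test_inputs)
print(f"surrogate fidelity: {fidelity:.0%}")  # 83% on this toy data
```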

Page 22

CONCLUSION

What LNE offers:
• A unique know-how in the organization of evaluation campaigns for AI systems (design of the evaluation plan, organization of evaluation meetings, management of the associated events)
  o to set up a rigorous metrological approach (repeatable performance measurements, reproducible experiments, qualified test databases, identified and controlled influence factors, limited biases)
  o to maximize the impact of evaluations
• Evaluation tools
  o the Matics software suite
  o annotation tools
  o real or simulated test environments, etc.
• A status:
  o trusted third party (LNE does not develop AI systems)
  o independent evaluator (LNE is public and independent of any private interest)

LNE is interested in:
• collaborating with other NMIs (national metrology institutes) to bring metrology expertise to the field of AI evaluation
• participating in projects aimed at demonstrating the performance and functionality of AI technologies
• setting up challenges, evaluation campaigns and competitions, especially in AI and robotics

Page 23

COLLABORATIVE TOPICS WITH LNE

Performance evaluation: accuracy, precision, trueness, robustness, resilience
• at the level of the overall system (autonomous cars, surgical robots, etc.),
• at the level of detection modules (obstacle detection, face recognition, etc.),
• at the level of decision-making modules (hazard management, etc.),
• at the level of action modules (autonomous navigation, etc.).

Explainability evaluation: how to relate the decision taken to the known data and characteristics of the situation?

Human-machine interaction evaluation: how to measure the quality of an interaction (during close cooperation between an intelligent personal assistant and a pilot, for example)?

Page 24

Thank you for your attention
