Methodology for Evaluation: AUTOPILOT Deliverable D.4.1

Bart Netten (Corresponding author), Elina Aittoniemi, Yvonne Barnard, Maija Federley, Lila Gaitanidou, Georgios Karagiannis, Viktoriya Kolarova, Olivier Lenz, Jordi Pont Rañé, Benedikt van den Boom, Ralf Willenbrock

Research output: Book/Report › Report

Abstract

This deliverable presents the methodologies for the evaluation of the piloted use cases: the technical evaluation and the assessments of user acceptance, quality of life and business impact.

The FESTA methodology is applied and enhanced to evaluate the added value of the Internet of Things (IoT) for improving cooperative and automated driving (AD). The main research question to be evaluated is defined as “What is the added value of IoT for AD?” This central question is refined, for each of the four evaluation perspectives, into more detailed research questions, hypotheses, key performance indicators, measurements and log data from the pilots, and evaluation methods. The methodologies provide the starting point for implementing and executing the evaluation tasks in the upcoming preparation and piloting phases.

The evaluation methodologies are tailored to the scale and scope of the pilot sites and the implementations of the use cases. Focusing the evaluation on the concepts and criteria that are most common across pilot sites and use cases maximises the synergy and coherence between the evaluation tasks. The potential of IoT to accelerate, enhance or enable automated driving functions and services will be evaluated and assessed collaboratively from all four perspectives. The methodologies will be extended with additional use-case- or pilot-site-specific evaluation criteria during the coming phases.

This deliverable also provides guidelines, requests and requirements for the pilot test scenarios and data provisioning that will be needed as input for the evaluation, and thus serves as input for the specification and data management of the pilots.
Original language: English
Number of pages: 90
Publication status: Published - 2018
MoE publication type: Not Eligible

Fingerprint

Information management
Specifications
Internet of things
Industry

Cite this

Netten, B., Aittoniemi, E., Barnard, Y., Federley, M., Gaitanidou, L., Karagiannis, G., ... Willenbrock, R. (2018). Methodology for Evaluation: AUTOPILOT Deliverable D.4.1.
