Methodology for Evaluation: AUTOPILOT Deliverable D.4.1

Abstract
This deliverable presents the methodologies for evaluating the piloted use cases: technical evaluation and the assessment of user acceptance, quality of life and business impact.
The FESTA methodology is applied and enhanced for evaluating the added value of the Internet-of-Things (IoT) to improve Cooperative and Automated Driving (AD). The main research question to be evaluated is defined as "What is the added value of IoT for AD?" This central question is refined, for each of the four evaluation perspectives, into more detailed research questions, hypotheses and key performance indicators, measurements and log data from the pilots, and evaluation methods. The methodologies provide the starting point for implementing and executing the evaluation tasks in the subsequent preparation and piloting phases.
The evaluation methodologies are tailored to the scale and scope of the pilot sites and the implementations of the use cases. Focusing the methodologies on the concepts and criteria shared across pilot sites and use cases maximises synergy and coherence between the evaluation tasks. Potential improvements by which IoT accelerates, enhances or enables automated driving functions and services will be evaluated and assessed collaboratively from all four perspectives. The methodologies will be extended with additional use-case- or pilot-site-specific evaluation criteria during the coming phases.
This deliverable also provides guidelines, requests and requirements for pilot test scenarios and data provisioning that will be needed as input for evaluation, and thereby for the specification and data management of the pilots.
Original language | English |
---|---|
Number of pages | 90 |
Publication status | Published - 2018 |
MoE publication type | Not Eligible |
Projects
AUTOPILOT: AUTOmated driving Progressed by Internet Of Things
Scholliers, J. (Manager)
1/01/17 → 29/02/20
Project: EU project