Early Research Program: Making Sense of Big Data

Big Data is an all-encompassing term for any data set so large and complex that it becomes difficult to process using on-hand data management tools or traditional data processing applications. The topics of this program are 1) creating value, 2) extracting meaning and 3) distributed data infrastructures. We focus on two fields where we expect Big Data methods to be a game changer: leagile logistics and personalized health.

Three roles in the Big Data ecosystem (from the European Big Data Value cPPP)

Although Big Data is thus formulated as a problem, it in fact holds enormous potential in various fields, ranging from health, food security, climate and resource efficiency to energy, intelligent transport systems and smart cities; an opportunity we cannot afford to miss.
Although the topic is clearly general and relevant for a wide range of domains, we will focus on two fields where we expect Big Data methods to be a game changer: Leagile logistics and personalized health.
Three main roles can be identified in the Big Data ecosystem: data provider, data processor/service provider, and service consumer (see figure). We adopt this picture to define three main technology lines: creating value, extracting meaning and distributed data infrastructures.

Activities

Big Data innovations demand new types of collaboration. In Creating Value we investigate the value of new Big Data applications (and the individual elements of the data value creation process), the context in which this value is created, and the broader potential economic and societal impacts.

In Extracting Meaning we develop tools for extracting meaning from big data sets. The challenge is to make these tools work on heterogeneous data sources that differ in data quality (consistency, completeness versus missing values, correctness, volatility), timescale and volume, language, and modality (images, video, 3D, etc.) over time.

In Distributed Data Infrastructures we develop generic infrastructure technologies that enable secure, reliable and cost-efficient access to, transport and storage of the unprecedentedly large amounts of (mostly) strongly distributed data, and that make these data available for further analysis, processing and enrichment.

Work packages of Sense Making of Big Data

Use cases

Leagile Logistics

The logistics use case aims to increase the resilience of goods flows between the Rotterdam mainport and its hinterland, making them lean and agile through real-time data, with the objective of handling the foreseen transhipment growth of the port of Rotterdam. The decisions of individual stakeholders in these goods flows can be improved, leading to more efficient capacity utilisation and fewer delays.

Personalized health

Health and its rising costs are an increasingly important topic for the public and private sectors. In order to transform the health sector into a financially more sustainable form, fundamental changes need to be made. Important elements of this transformation are prevention, participation, prediction and personalization. Lifestyle coaching (physical activity, personalized food, stress prevention) is an area where TNO has excellent domain knowledge, but is also well connected to relevant industries (e.g. the food industry and the ICT sector).

