
A history of (Big) Data and modelling


At the latest EMAC 2016 conference, Michel Wedel gave a brilliant demonstration of how data collection methods have evolved over the last 100 years and how modelling has changed with them. His graphical representation of (Big) Data history was so brilliant that I thought it worthwhile to reproduce here.

It all started with simple surveys and regression methods. Over that 100-year period, data volumes have kept increasing, and modelling methods have evolved to keep pace.

Yet, with research at Microsoft showing that prediction improves as data volume increases, one may wonder whether more complex models are still needed. And more generally, are models produced by academics still relevant, given that academics only have access to small samples compared to those of the industry?
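To make the intuition behind that claim concrete, here is a minimal, self-contained sketch (synthetic numbers, not Wedel's or Microsoft's actual data): when estimating a simple quantity such as the share of "yes" answers in a survey, the standard error shrinks as 1/√n, so even a very simple model keeps getting better as data volume grows.

```python
# Illustrative sketch: estimation error of a sample proportion shrinks
# as the number of survey responses n increases (standard error ~ 1/sqrt(n)).
import math
import random


def standard_error(p: float, n: int) -> float:
    """Theoretical standard error of a sample proportion."""
    return math.sqrt(p * (1 - p) / n)


def simulate_estimate(p: float, n: int, seed: int = 42) -> float:
    """Estimate p from n simulated yes/no survey answers."""
    rng = random.Random(seed)
    return sum(rng.random() < p for _ in range(n)) / n


true_p = 0.3  # hypothetical "true" share of yes-answers
for n in (100, 10_000, 1_000_000):
    est = simulate_estimate(true_p, n)
    se = standard_error(true_p, n)
    print(f"n={n:>9}: estimate={est:.4f}, theoretical SE={se:.4f}")
```

Running it shows the estimate tightening around the true value as n grows, which is exactly the point: past a certain volume, brute-force data can substitute for modelling sophistication on simple prediction tasks.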

I don’t have an answer to this question. I can only stress that the gap between academics and practitioners is growing.

Author: Pierre-Nicolas Schwab

Pierre-Nicolas holds a PhD in Marketing and runs the market research agency IntoTheMinds. His areas of expertise are Big Data, e-commerce, local retail, the HoReCa sector and logistics. He is also a marketing researcher at the Université Libre de Bruxelles and serves as a coach and trainer for several organisations and public institutions. He can be contacted by email, LinkedIn or by phone (+32 486 42 79 42).
