Turning big data into sound

A collaboration between two Virginia Tech professors, one of music and one of engineering, has produced a new platform for data analysis that makes data easier to understand by turning it into sound. It is a pioneering approach to studying spatially distributed data: instead of placing information in a visual context to reveal patterns or correlations (that is, data visualization), it uses an aural environment, leveraging the natural affordances of the space and the user's location within the sound field. Funded by the National Science Foundation, the work combines elements…
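The article does not describe the platform's internals, but the core idea of sonification, mapping data values onto audible parameters such as pitch, can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration (not the Virginia Tech system): each value in a series is mapped linearly to a frequency between two bounds, and the resulting tones are written to a WAV file using only the Python standard library.

```python
import math
import struct
import wave

def sonify(values, filename="sonified.wav", rate=44100, note_dur=0.25,
           f_lo=220.0, f_hi=880.0):
    """Map each data value to a pitch between f_lo and f_hi and
    render the sequence as sine tones in a mono 16-bit WAV file.
    (Illustrative sonification sketch; parameter names are invented.)"""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid divide-by-zero for constant data
    frames = bytearray()
    n = int(rate * note_dur)  # samples per note
    for v in values:
        # Linear mapping: smallest value -> f_lo, largest -> f_hi.
        freq = f_lo + (v - lo) / span * (f_hi - f_lo)
        for i in range(n):
            sample = 0.5 * math.sin(2 * math.pi * freq * i / rate)
            frames += struct.pack("<h", int(sample * 32767))
    with wave.open(filename, "wb") as wav:
        wav.setnchannels(1)   # mono
        wav.setsampwidth(2)   # 16-bit samples
        wav.setframerate(rate)
        wav.writeframes(bytes(frames))

# Example: an upward trend in the data becomes a rising run of tones.
sonify([1, 3, 2, 5, 8, 13], "trend.wav")
```

Listening to the output, a rising trend in the data is heard as rising pitch, which is the basic perceptual trick sonification relies on; spatial platforms like the one described extend this by also placing sounds around the listener.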

This story continues at The Next Web
