
Music of the Spheres: Big Data Meets Big Sound

Most of us learn by using our eyes. Whether it’s reading or poring over formulas, graphs, and charts, learning is a mostly visual experience. Now, professors at Virginia Tech are exploring a way to add another dimension to teaching and learning by transforming data into sound.

Ivica Ico Bukvic teaches composition and multimedia. “With our eyes, we only see what’s ahead of us. With our ears, we hear what is all around us.”

Bukvic is using the Cube, an experimental space inside the Moss Arts Center at Virginia Tech with one hundred twenty-nine floor-to-ceiling speakers, to design a whole new way to convey complicated information by turning it into sound.

Remember the movie Minority Report, starring Tom Cruise, where he used a transparent screen to move data around? “In Minority Report, the impressive thing was the person moving all these squares around,” Bukvic noted. “Well, in this space, you essentially have a huge screen, except it’s aural. You’re immersed in sound, and users will be very much in the same position (as in the movie). They’re going to be moving sounds around, looking for those correlations in sound that they can actually detect through their ears.”

Bukvic believes sound takes something of a shortcut to the brain. The idea is that the human ear can pick up tiny changes the eye might miss.

Here in the Cube, PhD student Woohun Joo, a graphic, interaction, and multimedia designer, is waving his hands around as if he’s conducting an invisible orchestra.

Disha Sardana, who’s working on her PhD in electromagnetics and music technology, explains how Joo’s mad gesticulations translate into the sounds we’re hearing. “It’s just a (motion capture) glove you wear; you don’t even have to put in effort to learn something. A user can come in and learn it in, like, two minutes, and you can do all the stuff you want.”

And since there’s more data than ever before to make sense of, the sky is the limit for how this might be used.

In fact, the first experiment will simulate the Earth’s upper atmosphere. Bukvic explains the process he calls ‘data sonification,’ which is, basically, representing information not in print but in sound. “For instance, there might be a situation where we are looking at geospace data where something could happen in front of us, but at the same time, or shortly thereafter, something might be happening above us or behind us. Hearing that might help us uncover those new patterns.”
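
For readers curious what data sonification can look like in practice, here is a minimal, hypothetical sketch; it is not the Virginia Tech team’s software, just an illustration of the basic idea. It maps a short series of made-up data values to pitches and pans them across a stereo field, a crude two-speaker stand-in for the Cube’s many-channel system.

```python
# Minimal sketch of data sonification: map data values to pitch and stereo
# position, then write a short WAV file. Purely illustrative, not the
# researchers' actual system.
import math
import struct
import wave

SAMPLE_RATE = 44100
NOTE_SECONDS = 0.25

# Hypothetical data series (e.g., readings from an atmospheric sensor)
data = [0.1, 0.3, 0.8, 0.5, 0.9, 0.2, 0.6]

def value_to_freq(v, lo=220.0, hi=880.0):
    """Map a value in [0, 1] to a frequency between lo and hi Hz."""
    return lo + v * (hi - lo)

frames = bytearray()
for i, v in enumerate(data):
    freq = value_to_freq(v)
    pan = i / max(len(data) - 1, 1)  # 0.0 = hard left, 1.0 = hard right
    for n in range(int(SAMPLE_RATE * NOTE_SECONDS)):
        sample = math.sin(2 * math.pi * freq * n / SAMPLE_RATE)
        left = int(32767 * sample * (1.0 - pan))
        right = int(32767 * sample * pan)
        frames += struct.pack('<hh', left, right)

with wave.open('sonified.wav', 'wb') as wav:
    wav.setnchannels(2)        # stereo stand-in for a multi-speaker space
    wav.setsampwidth(2)        # 16-bit samples
    wav.setframerate(SAMPLE_RATE)
    wav.writeframes(bytes(frames))
```

Higher data values come out as higher pitches, and later readings drift from the left speaker to the right, so a listener can follow the shape of the data without looking at a chart.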

Bukvic is mostly a sound guy. He hatched the idea for this experiment with Greg Earle, an electrical and computer engineering guy. Here’s how Earle explains why audio is such a powerful teaching tool.

“The example I like to use is, imagine you have your iPad and it’s playing a playlist. Well, after you’ve listened to that playlist three or four times, your brain tells you what to anticipate. You know (what) the next song (will be). It took no effort for you to memorize that. It came into your ears, you heard it three or four times, and that’s all it takes. It’s there.”

A short test piece they created includes sounds like fire crackling to signify heat and chirping crickets to evoke nature. It’s designed for a full surround-sound system, so you may not pick up the details quite the way you would if you were inside the Cube. For the next year and a half, they’ll be testing it with a grant from the National Science Foundation.

Robbie Harris is based in Blacksburg, covering the New River Valley and southwestern Virginia.