The Facebook fiasco is calling attention to a technology known as 'machine learning.' That's where computer systems learn from experience and automatically 'improve' their performance going forward. Could the same be in store for the tech industry?
Behind every web-based system is a hard-working algorithm telling a computer what to do. And for more than a decade, the computer scientists who create those algorithms have been worrying about, well, exactly what's happening now with Facebook. They've been meeting to discuss how to introduce concepts like fairness, accountability and transparency into the equation.
Bert Huang is a computer scientist at Virginia Tech. "It is something we’ve been aware of and have been trying to prepare the world for, but it's moving faster than the scientific community can fully control," he says.
Huang has been working with colleagues on ways to amplify the upsides of algorithms, like finding facts or friends fast, and guard against the downsides, some of which, he says, are overblown.
“There’s a lot of worry about these algorithms being able to ‘figure you out,’ with the data we provide on Facebook or Google or whatever — that 'someone will figure out my deepest darkest secrets.' That’s an unfounded fear because the algorithms are just not that smart yet."
And he doesn't think they ever really will be. Huang points out that machine learning is best at crunching huge amounts of data and handling repetitive tasks. It still takes people to smooth out the subtleties and lend a human hand to the process.
That is, at least for now.
But now that these issues of privacy, control of one's data, and business models based on mining it have become public, perhaps developers, like their machines, might learn from their mistakes.