Andrew Menzer 7/29/2016
Machine learning algorithms that technology companies implement in their products (Google Search, etc.) are beyond their own engineers' full comprehension.
There's no way to know, in a discrete, Enlightenment closed-system sense, how a neural net makes the associations it does, because the probabilistic associations it derives happen in layers hidden from the designer/engineer. It "just works" and gets more accurate as users feed it more data.
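To make that concrete, here's a toy sketch in plain NumPy (a made-up XOR example of my own, nothing to do with Google's actual systems): the network learns the task, but the hidden-layer weights it ends up with were never specified by anyone and don't read as any human rule.

```python
# Toy sketch (hypothetical; plain NumPy, not anyone's production code).
# A tiny network learns XOR, but its hidden-layer weights are just numbers
# that emerged from the data -- no engineer specified what they "mean".
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

# One hidden layer of 8 units, randomly initialized
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain gradient descent on squared error
for _ in range(10000):
    h = sigmoid(X @ W1 + b1)        # hidden-layer activations
    out = sigmoid(h @ W2 + b2)      # network output
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print("predictions:", out.round(3).ravel())
print("hidden-layer weights the training produced:")
print(W1.round(2))
# If training converged, the predictions approximate XOR, yet nothing in
# the weight matrix above was written by a person -- and explaining why
# those particular numbers encode XOR is already awkward at 8 hidden
# units, let alone at production scale.
```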
One of the more interesting real-world implications of what I'm describing is the EU antitrust cases against Facebook and Google.
In Google's case, no one can fully account for why Google Search recommends Google's own products over others. It's quite possible the AI interwoven through Google's products (the codebase runs to roughly a billion lines of code) started recommending its own apps and services as users fed it more data, the system logically concluding that, since they're part of the same platform, users would simply find them convenient. No one knows, though. The neural net's inputs are too numerous (every Google user's uploads, searches, etc.) and too varied to fully account for, and it's practically impossible to visualize a neural net's hidden layers in an easy-to-understand way.