Georgi Georgiev 3/13/2016
Eventually, in the ultimate expression of our Enlightenment exuberance, we constructed digital computers, the very embodiments of cause and effect. Computers are the cathedrals of the Enlightenment, the ultimate expression of logical deterministic control[1]. Through them, we learned to manipulate knowledge, the currency of the Enlightenment, beyond the capacity of our own minds. We constructed new realities. We built complex algorithms with unpredictable behavior. Thus, within this monument to Enlightenment thinking, we sowed the seeds of its demise. We began to build systems with emergent behaviors that were beyond our own understanding, creating the first crack in the foundation.
Do you mind giving an example?
Danny Hillis 3/14/2016
Programmed trading systems on the stock market.
Andrew Menzer 7/29/2016
Machine learning algorithms that technology companies implement in their products (Google search etc) are beyond engineers’ full comprehension.
There’s no way to know how a neural net makes the associations it does (in a discrete, Enlightenment-closed-system sense), because the probabilistic associations it derives happen at a layer hidden from the designer/engineer. It “just works” and gets more accurate as users feed it more data.
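The hidden-layer opacity described above can be sketched with a toy example (hypothetical, illustrative code, not any production system): even for a network small enough to train by hand, the learned hidden weights are just numbers, with no human-readable account of why they work.

```python
import numpy as np

# Toy task: learn XOR, a function no single linear layer can represent,
# so the network is forced to build an internal (hidden) representation.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))   # input -> hidden weights
W2 = rng.normal(size=(4, 1))   # hidden -> output weights

loss_before = float(np.mean((sigmoid(sigmoid(X @ W1) @ W2) - y) ** 2))

for _ in range(5000):          # plain gradient descent on squared error
    h = sigmoid(X @ W1)        # hidden activations: the "hidden layer"
    out = sigmoid(h @ W2)
    grad_out = (out - y) * out * (1 - out)
    dW2 = h.T @ grad_out
    dW1 = X.T @ ((grad_out @ W2.T) * h * (1 - h))
    W2 -= 1.0 * dW2
    W1 -= 1.0 * dW1

loss_after = float(np.mean((sigmoid(sigmoid(X @ W1) @ W2) - y) ** 2))

# The loss drops, so the net has learned *something* -- but W1 itself is
# an uninterpretable grid of floats, not an explanation of the solution.
print("loss before:", loss_before, "loss after:", loss_after)
print("hidden weights:\n", np.round(W1, 2))
```

The point of the sketch is the last two lines: we can verify the system works by its outputs, but inspecting the hidden weights gives no story about why, which is exactly the opacity at issue, scaled down by many orders of magnitude.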
One of the more interesting real-world implications of what I describe is the EU antitrust cases against Facebook and Google.
In Google’s case, no one can fully account for why Google search recommends the company’s own products over others. It’s quite possible the AI interwoven through Google’s products (a codebase of about 1 billion lines) was recommending its own apps and services as users fed it more data: the system logically concluded that, since they’re part of the same platform, users would simply find it convenient. No one knows, though. The neural net’s inputs are too numerous (every Google user’s uploads, searches, etc.) and too varied to fully account for, and it’s physically impossible to fully visualize a neural net’s hidden layer in an easy-to-understand way.
Jon Henrich 4/28/2016
A.I. would be an example of this, no? Much like the discovery of nuclear power: during testing and implementation, the consequences were not fully known. It was a milestone as well as both a productive and a destructive power.
"Thus, within this monument to Enlightenment thinking, we sowed the seeds of its demise."
- Danny Hillis