The diagram below is based on one that Bee drew; the lower part of the diagram is what I've added.

A Google research team built a machine, a neural network with a billion connections, using a computing array with 16,000 processors. (More on neural networks below.) They then trained the network on ten million digital images chosen at random from YouTube videos. They used unsupervised training, i.e., the machine was not given any feedback to evaluate its learning. And, lo and behold, the neural network was able to recognize cats (among other things). As Google Fellow Dr. Jeff Dean put it to The NY Times, "It basically invented the concept of a cat".

A bit about neural networks - this slide is from Prof. Yaser Abu-Mostafa's telecourse and depicts a generic neural network.

Very simply, the inputs **x** on the left - in our case, some set of numbers derived from the digital images - are multiplied by other numbers, called weights, and summed up. These weighted sums are fed to the first layer of the neural network. Each "neuron" (the circles marked with *θ*) receives its own weighted sum, *s*. The neuron applies the function *θ(s)*, which has the shape shown above, to convert its input into its output. The outputs of the first layer of neurons are used as inputs to the next layer, and so on. The above network has a single final output h(x), but you can build a network with many outputs. So, e.g., for a neural network successfully trained to recognize cats, h(x) could be +1 if the inputs **x** correspond to a cat and -1 if they correspond to a not-cat.
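The flow just described can be sketched in a few lines of Python. This is my illustrative toy, not the actual Google network: the weights are arbitrary, and tanh stands in for the S-shaped *θ* in the slide.

```python
import math

def theta(s):
    # S-shaped squashing function, as in the slide; tanh is one common choice.
    return math.tanh(s)

def layer(inputs, weights):
    # Each neuron computes theta(s), where s is its own weighted sum of the
    # inputs. weights[j] holds (bias, w1, w2, ...) for neuron j; the bias plays
    # the role of the circles containing the 1s in the diagram.
    return [theta(w[0] + sum(wi * xi for wi, xi in zip(w[1:], inputs)))
            for w in weights]

def network(x, layers):
    # The outputs of each layer feed the next layer, exactly as in the text.
    for weights in layers:
        x = layer(x, weights)
    return x

# Toy network: 2 inputs -> 2 hidden neurons -> 1 output h(x) in (-1, +1).
layers = [
    [(0.1, 0.5, -0.4), (-0.2, 0.3, 0.8)],  # hidden layer: 2 neurons
    [(0.0, 1.0, -1.0)],                     # output layer: 1 neuron
]
h = network([0.7, -0.3], layers)[0]
```

The output h is squeezed into (-1, +1); thresholding it at zero would give the +1 cat / -1 not-cat decision described above.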

Training simply involves choosing the weights. The learning algorithm is some systematic way of picking the weights and refining their values during training.

It is hypothesized that our brains work in very much the same way. Our brains probably undergo supervised training. In the above network, supervised training would mean that you feed it one of the images, look at the result the network gives (cat or not-cat), and then tell the learning algorithm whether the result was right or wrong. The learning algorithm changes the weights accordingly. Eventually, if the network learns successfully, the weights converge to some stable values. I say our brains undergo supervised training because natural selection would tend to wipe out anyone who misperceived reality.
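Supervised training of this kind can be illustrated with the simplest possible case: a single hard-threshold neuron trained by the classic perceptron rule (my toy example, not the algorithm Google used). Show it an input, compare its answer with the right one, and nudge the weights only when it is wrong.

```python
def predict(weights, x):
    # Single neuron with a hard threshold: +1 ("cat") or -1 ("not-cat").
    s = weights[0] + sum(w * xi for w, xi in zip(weights[1:], x))
    return 1 if s >= 0 else -1

def train(samples, labels, epochs=20, lr=0.1):
    # weights[0] is the bias; the rest is one weight per input.
    weights = [0.0] * (len(samples[0]) + 1)
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            if predict(weights, x) != y:      # supervision: right or wrong?
                weights[0] += lr * y          # wrong -> adjust the weights
                for i, xi in enumerate(x):
                    weights[i + 1] += lr * y * xi
    return weights

# Linearly separable toy data: label is +1 roughly when x1 + x2 > 1.
samples = [(0.0, 0.0), (1.0, 1.0), (0.9, 0.8), (0.1, 0.2)]
labels  = [-1, 1, 1, -1]
w = train(samples, labels)
```

After a couple of passes over this separable toy data, the weights stop changing - the "stable values" mentioned above - and the neuron classifies every sample correctly.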

The Google experiment used unsupervised learning, which, to me, makes its discovery of "cat" all the more remarkable.
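Unsupervised learning, by contrast, gets no right/wrong feedback at all: the algorithm must find structure in the data on its own. A minimal illustration (again my toy, not Google's method) is clustering points into groups without ever being told their labels - here, a tiny one-dimensional k-means.

```python
def kmeans(points, k=2, iters=10):
    # Start with the first k points as cluster centers; no labels are supplied.
    centers = list(points[:k])
    for _ in range(iters):
        # Assign each point to its nearest center...
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: abs(p - centers[j]))
            clusters[nearest].append(p)
        # ...then move each center to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers

# Two obvious groups; the algorithm discovers them without supervision.
data = [0.1, 0.2, 0.15, 5.0, 5.2, 4.9]
centers = sorted(kmeans(data))
```

The two centers settle near the two natural groups in the data - structure discovered purely from the inputs, the same spirit (on a vastly smaller scale) as the network that discovered "cat".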

[I should probably say a little more about biological neural networks. A neuron receives inputs, usually from other neurons, across junctions called synapses. Some inputs are excitatory and some are inhibitory. The strength of the signal received depends on the properties of the synapse. If the overall strength of the inputs crosses some threshold, the neuron fires; otherwise it remains quiescent. (The analogs of the threshold and of the quiescent/firing behavior are handled in our machine neural network above by the inputs from the circles containing the 1s and by the *θ(s)* function, respectively.) I hope it is clear why the machine network above is naturally called a neural network.]

Arriving at the main theme: if what the neural network (and our brains) have assembled by perception of the environment is to be termed knowledge, where does it fit on Bee's original chart? The collection of weights and connections in the neural network comprises, at best, an implicit model of cats. It is these implicit models, presumably derived from the Real World Out There, that we are conscious of as objects in the Real World Out There. Because we have been "trained" by evolution, the objects we perceive do, in fact, mostly correspond to objects in the Real World Out There.

Once we have Theory, we have abstract concepts on top of which to build further, and we eventually arrive at ideas, such as atoms and molecules, that are not directly there in our perceptions but are there in the real world. Perhaps our difficulties with quantum mechanics arise because our brains have never had to handle inputs with quantum mechanical features; if so, it is not a limitation of neural networks, only a limitation of our perceptions. So perhaps a super-Google neural network could be trained to "understand" quantum mechanics in a way that we never can. That is, until we build additional "senses" - brain-machine interfaces - that can feed additional data into our brains, to train the neural networks in our heads.

Further, our brains are definitely biased by the demands of survival. Now we have, in principle, the ability, via unsupervised learning in very large neural networks, to find out more about the Real World Out There, without that bias imposed. New means of knowledge????