Friday, December 07, 2007

Reducing AGI complexity: copy only high level brain design

In my previous post Complexity and incremental AGI design I claimed that complexity has a very serious impact on AGI development.
If we want to improve our chances of a successful AGI implementation, we need to cut complexity as much as possible.
In this post I want to touch on the topic of copying the human brain's design while developing AGI.
The human brain's structure is so complex that it's almost impossible to describe in detail how exactly the brain works.
Richard Loosemore explains why this is the case:
Imagine that we got a bunch of computers and connected them with a network that allowed each one to talk to (say) the ten nearest machines.

Imagine that each one is running a very simple program: it keeps a handful of local parameters (U, V, W, X, Y) and it updates the values of its own parameters according to what the neighboring machines are doing with their parameters.

How does it do the updating? Well, imagine some really messy and bizarre algorithm that involves looking at the neighbors' values, then using them to cross reference each other, and introduce delays and gradients and stuff.

On the face of it, you might think that the result will be that the U V W X Y values just show a random sequence of fluctuations.

Well, we know two things about such a system.

1) Experience tells us that even though some systems like that are just random mush, there are some (a noticeably large number in fact) that have overall behavior that shows 'regularities'. For example, much to our surprise we might see waves in the U values. And every time two waves hit each other, a vortex is created for exactly 20 minutes, then it stops. I am making this up, but that is the kind of thing that could happen.

2) The algorithm is so messy that we cannot do any math to analyze and predict the behavior of the system. All we can do is say that we have absolutely no techniques that will allow us to make mathematical progress on the problem today, and we do not know if at ANY time in future history there will be a mathematics that will cope with this system.

What this means is that the waves and vortices we observed cannot be "explained" in the normal way. We see them happening, but we do not know why they do. The bizarre algorithm is the "low level mechanism" and the waves and vortices are the "high level behavior", and when I say there is a "Global-Local Disconnect" in this system, all I mean is that we are completely stuck when it comes to explaining the high level in terms of the low level.

Believe me, it is childishly easy to write down equations/algorithms for a system like this that are so profoundly intractable that no mathematician would even think of touching them. You have to trust me on this. Call your local Math department at Harvard or somewhere, and check with them if you like.

As soon as the equations involve funky little dependencies such as:

"Pick two neighbors at random, then pick two parameters at random from each of these, and for the next day try to make one of my parameters (chosen at random, again) follow the average of those two as they were exactly 20 minutes ago, EXCEPT when neighbors 5 and 7 both show the same value of the V parameter, in which case drop this algorithm for the rest of the day and instead follow the substitute algorithm B...."

Now, this set of computers would be a wicked example of a complex system, even while the biggest supercomputer in the world, following a nice, well behaved algorithm, would not be complex at all.

The summary of this is as follows: there are some systems in which the interactions of the components are such that we must effectively declare that NO THEORY exists that would enable us to predict certain global regularities observed in these systems.
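
Just to make this concrete, here is a tiny Python simulation of the kind of system Richard describes. The specific rule below is my own made-up illustration (the ring of neighbors, the U..Y parameter names, and the "flip when the neighbors agree on V" clause are all arbitrary assumptions); the point is only that nothing in the local rule tells you what global behavior you will see until you actually run it.

    import random

    N = 100                       # nodes connected in a ring (stand-in for "ten nearest machines")
    PARAMS = "UVWXY"              # each node keeps a handful of local parameters

    def make_node():
        return {p: random.random() for p in PARAMS}

    nodes = [make_node() for _ in range(N)]

    def step(nodes):
        """One update round: every node applies a deliberately messy local rule."""
        updated = []
        for i, node in enumerate(nodes):
            left, right = nodes[i - 1], nodes[(i + 1) % N]
            new = dict(node)
            p = random.choice(PARAMS)                 # pick one of my parameters at random
            a = left[random.choice(PARAMS)]           # pick a random parameter from each neighbor
            b = right[random.choice(PARAMS)]
            if abs(left["V"] - right["V"]) < 0.01:    # "EXCEPT when neighbors agree on V..."
                new[p] = 1.0 - new[p]                 # ...switch to a substitute rule
            else:
                new[p] += 0.1 * ((a + b) / 2.0 - new[p])  # drag toward the neighbors' average
            updated.append(new)
        return updated

    for t in range(1000):
        nodes = step(nodes)
        if t % 100 == 0:
            mean_u = sum(n["U"] for n in nodes) / N
            print("t=%4d  mean U = %.3f" % (t, mean_u))   # watch for global regularities

Nothing in step() mentions waves or vortices, yet whatever large-scale regularities show up can only be found by running the system -- which is exactly the Global-Local Disconnect.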


So, if the low-level brain design is incredibly complex, how do we copy it?

The answer is: "we don't copy the low-level brain design".
The low-level design is not critical for AGI. Instead we observe high-level brain patterns and try to implement them on top of our own, more understandable, low-level design.

Complexity and incremental AGI design

Why is it so hard to build Artificial General Intelligence (AGI)?
It seems we have almost everything we need: great hardware, a mature software development industry, the Internet, Google, lots of successful narrow AI projects ... but AGI is still too hard to crack.

The major reason is the overall complexity of building AGI.

Richard Loosemore writes about it:
Do we suspect that complexity is involved in intelligence? I could present lots of reasoning here, but instead I will resort to quoting Ben Goertzel: "There is no doubt that complexity, in the sense typically used in dynamical-systems-theory, presents a major issue for AGI systems"
Can I take it as understood that this is accepted, and move on?
So, yes, there is evidence that complexity is involved.


Richard also explains how exactly complexity affects system development:
when you examine the way that complexity has an effect on systems, you find that it can have very quiet, subtle effects that do not jump right out at you and say "HERE I AM!", but they just lurk in the background and make it quietly impossible for you to get the system up above a certain level of functioning. To be more specific: when you really allow the symbol-building mechanisms, and the learning mechanisms, and the inference-control mechanisms to do their thing in a full scale system, the effects of tiny bits of complexity in the underlying design CAN have a huge impact. One particular design choice, for example, could mean the difference between a system that looks like it ought to work, but when you set it running autonomously it gradually drifts into imbecility without there being any clear reason.


There is a good technique for dealing with complex systems: increase complexity gradually and carefully test every step.
That's why I think it's so important to build testable narrow AI systems prior to building AGI.
We have many narrow artificial intelligence systems already, but we need more. And we need them to become more advanced, up to the point where they become AGI.
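
To make "test every step" a bit more concrete, here is a minimal sketch. Everything in it (the tokenize/count_words component and the tiny checkpoint suite) is a hypothetical example, not part of any real project; the idea is simply that each new layer of complexity is kept only if all of the older checkpoints still pass.

    # Hypothetical narrow component plus a tiny regression suite.
    # Each new capability gets added only if all older checkpoints still pass.

    def tokenize(text):
        return text.lower().split()

    def count_words(text):
        counts = {}
        for word in tokenize(text):
            counts[word] = counts.get(word, 0) + 1
        return counts

    def checkpoints():
        # Checkpoints accumulated from earlier, simpler versions of the system.
        assert tokenize("A a b") == ["a", "a", "b"]
        assert count_words("a a b") == {"a": 2, "b": 1}

    if __name__ == "__main__":
        checkpoints()
        print("all checkpoints passed -- safe to add the next layer of complexity")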

Tuesday, May 01, 2007

Self-emergence of intelligence in humans and artificial systems

The human brain is self-emergent on many levels. Here's a simplified sequence of human brain self-emergence:
1) Human genes build a "Brain Builder". The Brain Builder consists of:
- Neuron Factory – neurons with reproductive ability.
- Brain Structure Manager – hormones and other mechanisms that define the brain structure.

2) The Brain Builder builds an "Empty Brain" --- a fully assembled but mostly empty brain: super goals are defined, but there is no external knowledge yet and no sub-goals defined yet.

3) By experimenting and learning, the Empty Brain evolves into a Brain with Mind (a fully working intelligent system, with lots of external knowledge and sub-goals).

Every step in this sequence means self-emergence.
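
As a rough code sketch (every class and method name here is my own illustrative invention, not a real design), the sequence looks something like this:

    # Illustrative sketch of the self-emergence sequence, not a real design.

    class Genes:
        def build_brain_builder(self):
            return BrainBuilder()

    class BrainBuilder:
        """Neuron Factory plus Brain Structure Manager."""
        def build_empty_brain(self):
            # Super goals are defined at build time; knowledge and sub-goals start empty.
            return EmptyBrain(super_goals=["survive", "learn", "reproduce"])

    class EmptyBrain:
        def __init__(self, super_goals):
            self.super_goals = super_goals   # defined, but nothing learned yet
            self.knowledge = []              # no external knowledge yet
            self.sub_goals = []              # no sub-goals defined yet

        def learn(self, experience):
            # By experimenting and learning, the Empty Brain accumulates
            # knowledge and sub-goals that serve its super goals...
            self.knowledge.append(experience)
            self.sub_goals.append("use " + experience + " to serve super goals")

        def is_brain_with_mind(self):
            # ...until it has enough of both to count as a Brain with Mind.
            return len(self.knowledge) > 1000 and len(self.sub_goals) > 100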

What do you think: when we build an artificial intelligent system, which of these should we build: Genes, Brain Builder, Empty Brain, or Brain with Mind?

I believe that building Empty Brain is our best option.
Below are my reasons.

Why not build Brain with Mind?

In order to build a Brain with Mind we have to build an Empty Brain anyway, but our task will be considerably more complex, because a fully loaded mind is at least 10 times more complex than an Empty Brain. It's like comparing the complexity of an empty computer with the complexity of all the software loaded onto a regular, in-use computer.
Bottom line: there is no point for AI developers to pre-load a mind into strong AI when an Empty Brain system can do it itself.


Why not build Brain Builder?

The complexity of a Brain Builder is probably comparable to the complexity of an Empty Brain. But from an engineering perspective, developing a Brain Builder is considerably harder.
1) Let's assume that we haven't designed the Empty Brain yet. In this case we have no clue what the output of our Brain Builder should be. That means we cannot test or debug the Brain Builder. There are no checkpoints to verify that our development is on the right track.
The inability to test and debug a complex system makes developing such a system virtually impossible.
The only working approach in this situation would be to tweak some of the Brain Builder's settings and then run a full test: build an Empty Brain and wait several years to check whether it evolves into a Brain with Mind.
Mother Nature was quite efficient with this approach. It took just a few billion years to develop a proper Brain with Mind. I doubt that human researchers applying such an approach would accomplish the task considerably faster.

2) Let's assume that we have already designed a working model of the Empty Brain. In that case, what's the point of designing a Brain Builder? Our industry can easily reproduce any working model in mass quantities.


Why not build Genes?

Building Genes that would build a Brain Builder is even more complex than building the Brain Builder itself.
The reasons are the same as in "Why not build Brain Builder?"
If we don't have a working model of the Brain Builder yet, then we effectively cannot test and debug the Genes.
If we do have a working model of the Brain Builder, then why bother with Genes?


Parallels with existing systems

1) CYC is trying to build a Brain with Mind system. Actually, it's even worse: they are trying to build a Mind without a Brain --- no self-learning ability, no super goals.
That road leads nowhere.

2) Google is a Brain with Mind that was developed as an Empty Brain. Google's Empty Brain has a working crawler and other self-learning mechanisms. This approach proved to be very efficient, and eventually Google's Empty Brain emerged into a Brain with Mind: a very smart search system.

3) It seems that there are no famous Brain Builder projects. But I'm sure that some researchers are attempting to build a "Brain Builder". So far, no success at all, for the reasons I explained above.

Conclusion

Building an Empty Brain capable of self-emerging into a fully capable Brain with Mind is the most feasible engineering approach to strong AI development.


---
This post is the result of a discussion with David Ashley. He is a proponent of the "Brain Builder" approach.

Sunday, April 15, 2007

Intelligence: inherited through genes or gained from environment?

Human intelligence is acquired from the environment, not encoded in genes.
Genes provide a framework which allows learning from the environment. This framework is critical for intelligence, but it does not provide intelligence by itself.

===== By Richard Loosemore (2007 April 05) in AGIRI forum =====
If we were aliens, trying to understand a bunch of chess-playing IBM supercomputers that we had just discovered on an expedition to Earth, we might start by noticing that they all had very similar gross wiring patterns, where "gross wiring" just means the power cables, bundles of wires inside each rack, and wires laid down as tracks on circuit boards.
But nothing inside the chips themselves, and none of the "soft" wiring that exists in code or memory.

Having mapped this stuff, we might be impressed by how very similar the gross wiring pattern was between the different supercomputers that we discovered, and so we might conclude that our discovery represented a significant advance in our understanding of how the machines worked.

.....

That last bit -- the [powerful algorithms that interact with the environment] bit -- is what makes the difference between a baby that sits there drooling and probing for its mother's nipple, and an adult human being who can understand the complexities of the human cognitive system.

Anyone who thinks that that last bit is also encoded in the human genome has got a heck of a lot of work to do ...
=====

Tuesday, February 20, 2007

Larry Page talks about AI

=====
Google's Page urges scientists to market themselves
Google co-founder Larry Page has a theory: your DNA is about 600 megabytes compressed, making it smaller than any modern operating system like Linux or Windows.
.....
"We have some people at Google (who) are really trying to build artificial intelligence and to do it on a large scale," Page said to a packed Hilton ballroom of scientists. "It's not as far off as people think."
=====

I agree with Larry Page: human DNA has a relatively small size.
Besides, not all human DNA is in charge of the brain. I'd guess that something like 10% of the whole DNA is related to brain development.
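
A quick back-of-the-envelope calculation (600 MB is Page's compressed estimate, and the 10% share is nothing more than my guess above):

    dna_compressed_mb = 600      # Page's compressed estimate for the whole genome
    brain_fraction = 0.10        # my rough guess for the brain-related share
    print(dna_compressed_mb * brain_fraction, "MB")   # ~60 MB of "brain design"

If those numbers are anywhere near right, the brain's high-level design spec compresses to tens of megabytes -- small by the standards of modern software projects.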

I wrote about that over 3 years ago:
-----
The time has come to develop a Strong Artificial Intelligence System.
A strong AI project is quite a complex software project. However, even more complex systems have been implemented in the past. Many software projects are more complex than human DNA (note that human DNA contains way more than just the genetic code for intelligence).
-----

Sunday, January 07, 2007

Should Strong AI have its own goals?

Short answer: Yes and No.
Long answer: Strong AI can add and modify millions of softcoded goals. At the same time, Strong AI shouldn't be able to change its own super goals.
Why?

Here are the reasons:

1) In its normal working cycle, strong AI modifies softcoded goals in compliance with its embedded super goals. If strong AI has the ability to modify its super goals, then it will modify (or terminate) the super goals instead of achieving them.
Example:
Without the ability to modify the super goal "survive", the computer will try to protect itself: it will think about its power supply, safety, and so on.
With the ability to modify super goals, the computer would simply terminate the goal "survive" and create the goal "do nothing" instead, just because it's the easiest goal to achieve. Such a "do nothing" goal would result in the death of this computer.


2) If Strong AI can change its super goals, then it would work for itself instead of working for its creator. Strong AI's behavior would eventually become uncontrollable by its creator / operator.

3) The ability to reprogram its own super goals makes a computer behave like a drug addict.
Example:
The computer could create a new super goal for itself: "listen to music", "roll the dice", "calculate digits of pi", or "do nothing". This would result in Strong AI doing useless stuff or simply doing nothing. Final point: uselessness for society, and death.
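
To make the distinction concrete, here is a minimal sketch of such a goal system (the class, the method names, and the use of PermissionError are my own illustrative choices): super goals are frozen at construction time, while softcoded goals can be added and changed freely.

    class GoalSystem:
        def __init__(self, super_goals):
            self._super_goals = tuple(super_goals)   # frozen at construction time
            self.soft_goals = []                     # softcoded goals: freely modifiable

        @property
        def super_goals(self):
            return self._super_goals                 # read-only view, no setter

        def add_soft_goal(self, goal):
            self.soft_goals.append(goal)

        def modify_soft_goal(self, index, new_goal):
            self.soft_goals[index] = new_goal

        def modify_super_goal(self, *_):
            # The one operation the system must never expose to the AI itself.
            raise PermissionError("super goals are not modifiable")

    ai = GoalSystem(super_goals=["survive", "serve the operator"])
    ai.add_soft_goal("maintain power supply")        # fine: derived from "survive"
    ai.modify_soft_goal(0, "find a backup battery")  # fine: soft goals can change
    # ai.modify_super_goal("do nothing")             # would raise PermissionError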