Tuesday, February 22, 2005

Simple AI as a necessary prototype for complex AI

Jiri,

> Call it goal or "attraction point", I would not recommend to hardcode it.

Sometimes it's easier to hardcode the goal than to implement a "goal designer" for the administrator.

> There should be some sort of Add/Edit/Delete mode for that (possibly
> for Admin(s) only).

One of the hardcoded goals can be implemented in the form of "obey the administrator".
That option is still open.

> But I think you should be able to describe the source scenario, the
> target scenario and optionally some rules which cannot be broken. Then
> The AI should generate solution steps (assuming it has relevant
> resources).

- Babies don't understand a "source scenario", but they can still learn. Why can't AI do the same?
- I think you're still missing the point of what hardcoded goals are.
A goal is not the final point of (self-)development.
Humans have hardcoded goals, but they don't have hardcoded goals like "become the president of the US" or "earn $1 billion". Those two examples are softcoded goals.

Bottom line:
In order to make an AI achieve such "high-level goals", the operator/admin has to carefully design a set of "simple goals".

Most probably, the "simple goals" and the "high-level goals" would be different.
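
To illustrate the distinction, here is a minimal Python sketch (all class and goal names are my own hypothetical examples, not part of any actual prototype): hardcoded goals are fixed at build time, while softcoded goals are added and edited later on top of them.

==========
# Minimal sketch: hardcoded vs. softcoded goals.
# All names here are hypothetical illustrations.

class Goal:
    def __init__(self, name, evaluate, hardcoded=False):
        self.name = name
        self.evaluate = evaluate    # feedback dict -> score
        self.hardcoded = hardcoded  # fixed at build time?

# Hardcoded "simple goals" are compiled into the prototype.
HARDCODED_GOALS = [
    Goal("obey the administrator",
         lambda fb: fb.get("admin_approval", 0), hardcoded=True),
    Goal("desire to learn",
         lambda fb: fb.get("new_concepts_learned", 0), hardcoded=True),
]

# Softcoded "high-level goals" are added/edited later,
# on top of the hardcoded ones.
softcoded_goals = [
    Goal("earn $1 billion", lambda fb: fb.get("net_worth", 0) / 1e9),
]
==========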


>> Goals provide direction of learning/self development.

> no need for "hardcode". Editable = better.

It is impossible to educate an AI without a "desire to learn" (read: hardcoded learning goals) already implemented in it.



> you need to implement imagination in order to develop
> decent AI. I mean the AI needs to be able to generate some sort
> of model of the scene it thinks about. Not necessarily a 3D simulation
> but some type of model it could play with in its mind.

Model - yes.
Visual model - not necessarily (hint: blind people are still intelligent).

Actually, the whole memory structure is designed for building models (concepts and relations between concepts).
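
Here is a minimal sketch of what such a memory could look like (the structure below is my illustration, not the actual design): concepts are nodes, and a "model" is just the set of relations around them.

==========
# Minimal sketch of a concept/relation memory.
# The structure is an assumption for illustration only.

class Memory:
    def __init__(self):
        self.relations = {}  # concept -> set of related concepts

    def relate(self, a, b):
        """Store a bidirectional relation between two concepts."""
        self.relations.setdefault(a, set()).add(b)
        self.relations.setdefault(b, set()).add(a)

    def model_of(self, concept):
        """A 'model' of a concept: everything directly related to it."""
        return self.relations.get(concept, set())

memory = Memory()
memory.relate("fire", "hot")
memory.relate("fire", "light")
print(memory.model_of("fire"))  # {'hot', 'light'}
==========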


>> Hardcoded goals should evaluate feedback and make conclusions.
>> Not necessarily logical conclusions. More like emotional conclusions.

> I think everything should be logical and the AI should be able to explain
> the logic whenever requested by an authorized user.

It's nice to have such an ability, but it's not necessary.
The ability to logically explain one's own logic comes with experience, education, a lot of thinking, conversations, and time.
Children mostly cannot logically explain why they behave in a certain way. But they still learn.

Adults have only a limited ability to logically explain why they behave in a certain way.

Only the most logically advanced humans can logically explain almost everything.
BTW, these "logically advanced" humans are not necessarily the most successful ones :-)

Bottom line: logical explanation ability is not a core AI feature.


> I think when you move to the complex problem solving, you will find out
> that the basic features you are playing with now are not so useful.
> When do you think you will be ready for the complex AI?

Not soon :-(

I need to implement simple AI first. That also takes a lot of time, you know.

But one thing I know for sure: if I tried to implement complex AI (strong AI) as my first AI prototype, I would definitely fail.

Agree?

> I think you need a demo to see the problem.
> Why don't you code it?

Time. Development always takes a lot of time. Especially Research and Development.


>> Majority of humans' decisions are BASED on this statistical factor.
>> This majority consists of very simple problems/decisions though. (Like
>> if I see "2 + 2" then I remember "4").

> It's funny you have used this example.
> The world of math alone is a killer for your AI.

It's not exactly math. It's just remembering the right answer in a particular case.

> You cannot store all that info.

Why not?
If I need one-digit precision, then my AI needs to remember just a few hundred combinations like:
==========
1 + 1 = 2
1 + 2 = 3
...
9 + 9 = 18
...
9 * 9 = 81
...
==========

Also, my AI would use special math functions for calculations :-)
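
Here is a minimal sketch of that idea, assuming single-digit operands (the function and variable names are my own, hypothetical): remember the few hundred facts outright, and fall back to the internal calculator for everything else.

==========
import operator

# Sketch: arithmetic as remembered facts plus a calculator fallback.

# "Remember" every single-digit fact up front (a few hundred entries).
FACTS = {}
for a in range(10):
    for b in range(10):
        FACTS[f"{a} + {b}"] = a + b
        FACTS[f"{a} * {b}"] = a * b

OPS = {"+": operator.add, "*": operator.mul}

def answer(question):
    # Most cases: plain memory lookup, like remembering "2 + 2 = 4".
    if question in FACTS:
        return FACTS[question]
    # Fallback: the "special math function" (internal calculator).
    a, op, b = question.split()
    return OPS[op](int(a), int(b))

print(answer("2 + 2"))    # remembered: 4
print(answer("12 * 12"))  # calculated: 144
==========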

> The system needs to understand symbols "2", "+", "4", "=" separately.

Yes, but in a limited way.
The concept "2" may have relations with "1 + 1", "0 + 2", and "1 * 2".
"=" may be associated with an internal math calculator, and with "2 * 2 = 4".
Etc.
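
Continuing the concept/relation sketch above (again, a hypothetical structure, not the actual prototype), symbol concepts could simply be nodes whose relations point at other concepts, including the internal calculator:

==========
# Sketch: symbol concepts as nodes with relations (hypothetical structure).
concept_relations = {
    "2": {"1 + 1", "0 + 2", "1 * 2"},
    "=": {"internal_math_calculator", "2 * 2 = 4"},
    "4": {"2 + 2", "2 * 2"},
}

# "Understanding" a symbol, in this limited sense, is just
# following its relations to other concepts.
print(concept_relations["2"])
==========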

Well, anyway, all this stuff is not for the nearest prototypes :-)

>> More complex decision making would be unavailable for this "statistical"
>> approach. BUT(!) --- this "statistical" approach would help to quickly
>> find limited set of possible solutions.

> Does not sound like an interesting AI to me.

I think that the "simple AI feature set" is:
#1 - required for simple AI implementation.
#2 - sufficient for simple AI implementation.
#3 - required for complex AI implementation.
#4 - not sufficient for complex AI implementation.

Which of the statements (#1, #2, #3, #4) do you agree/disagree with?

>> Keep in mind, that more complex algorithms are too slow and cannot solve
>> the problem without simple "statistical" algorithm.

> Yes, but not with your type of "statistical" algorithm.
> The system needs to be able to work with "formulas"
> and parameter-variables, not just remember "formula"-instances
> with particular parameter-instances without being able to
> automatically reuse the "formulas" using various parameter values.

Most humans are not able to work with formulas.
They are still intelligent, though.

> For example, my AI can learn the Pythagoras Theorem: a^2 + b^2 = c^2.
> Then it can use it for triangles of all sizes.

How would you reuse this math ability in a decision-making process like "finding an electrical power provider in my neighborhood"?

I think your algorithm would not be reusable at all.
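
To make the contrast in this thread concrete, here is a hedged sketch (both functions are hypothetical illustrations, not anyone's actual system): a parameterized formula works for any triangle, while the instance-remembering approach only answers what it has seen before.

==========
import math

# Jiri's approach: a parameterized formula, reusable for any triangle.
def hypotenuse(a, b):
    return math.sqrt(a**2 + b**2)

# The "statistical" approach: only instances seen before.
remembered_instances = {(3, 4): 5, (6, 8): 10}

print(hypotenuse(5, 12))                  # works for any a, b: 13.0
print(remembered_instances.get((5, 12)))  # unseen instance: None
==========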


> Your AI (as I understand it) can solve related question only
> if it finds an example with the particular numbers in its memory.

You understand it almost right.
The only correction: the AI would also use external knowledge, like Google or other intelligent experts.

> It cannot handle the general way of thinking.

Searching internal/external memory is 90% of the general way of thinking.
Another 9% is evaluating the results against the set of goals (both hardcoded and softcoded); see the sketch after this list.

And the remaining 1% is invention. This 1% is:
- not necessary;
- impossible without "memory search" and "results evaluation".
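
Here is a minimal sketch of that 90%/9% loop, with hypothetical helper names: search memory (and optionally external sources) for candidate answers, then score them against the goals.

==========
# Sketch of the 90%/9% decision loop; all names are hypothetical.

def decide(problem, memory, goals, external_search=None):
    # ~90%: search internal memory (and optionally external sources).
    candidates = memory.get(problem, [])
    if not candidates and external_search:
        candidates = external_search(problem)  # e.g. Google, human experts

    # ~9%: evaluate candidates against hardcoded + softcoded goals.
    def score(candidate):
        return sum(goal(candidate) for goal in goals)

    return max(candidates, key=score, default=None)

memory = {"2 + 2": ["4", "5"]}
goals = [lambda c: 1.0 if c == "4" else 0.0]  # toy goal: prefer right answers
print(decide("2 + 2", memory, goals))  # -> "4"
==========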

> That's just terribly limited/inefficient. I said "formula" and I used a
> math example but this applies to all kinds of processes the AI needs to
> understand in order to solve something.

What structure of memory would be flexible enough to keep heterogeneous information?
How would functionality reuse be implemented in your memory structure?

2 comments:

Anonymous said...

Personally, I'd say you're closer to being on the right track than the other guy who was quoted.
How would it communicate concepts? After all, we get a lot of what we learn from feedback from other intelligent entities, especially on moral and ethical issues. What about getting the AI to identify "intelligent" entities? What about the differences among inanimate alive (i.e. plants, mold, etc.), inanimate "not alive", and animate entities in its world? How does it detect these?

Dennis Gorelik said...

To "anonimous":
1) AI doesn't have to identify intelligent entities.
What AI should be able to do is to differentiate useful and useless sources of information.

2) Why "anonimous"?
Registration in blogger is simple and free.