Tuesday, March 01, 2005

Simple AI as a necessary prototype for complex AI

Jiri,

1) Goals defined by an operator are even more dangerous.
2) You can load data from CYC, but this data wouldn't become knowledge. Therefore it wouldn't be learning, and it wouldn't be useful.
Goals are still necessary for learning. Only goals give meaning to learning.

3) Why would a long question cause a "no answer found" result? Quite the contrary: the longer the question, the more links to possible answers can be found.
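
To illustrate the point, here is a minimal sketch with entirely hypothetical data (the word-to-concept index and the weights are illustrations, not my actual implementation) of why each extra word in a question adds candidate links instead of removing answers:

```python
from collections import defaultdict

# Hypothetical index built during learning: word -> {answer concept: link weight}.
WORD_LINKS = {
    "cheap": {"phone_plans": 0.3, "discount_stores": 0.6},
    "cell":  {"phone_plans": 0.9, "biology": 0.2},
    "phone": {"phone_plans": 0.8, "phone_repair": 0.5},
    "plan":  {"phone_plans": 0.7, "project_planning": 0.4},
}

def candidate_answers(question):
    """Score candidate answers by summing link weights from every question word."""
    scores = defaultdict(float)
    for word in question.lower().split():
        for concept, weight in WORD_LINKS.get(word, {}).items():
            scores[concept] += weight
    # More words mean more links, so more (and better ranked) candidates, never fewer.
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

print(candidate_answers("cheap cell phone plan"))
# phone_plans ranks first with total weight ~2.7; the other concepts trail far behind.
```

Every word contributes links, so a longer question only enlarges and sharpens the candidate set.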

4)
>> Bottom line: "Generalization is not core AI feature".

> It's not a must for AI, but it's a pretty important feature.
> It's a must for Strong AI. AI is very limited without that.

- I have ideas about how to implement a generalization feature.
Would you like to discuss these ideas?
- I think it's not a good idea to implement generalization in the first AI prototype.
Do you think that generalization should be implemented in the first AI prototype?


5)
> "Ability to logically explain the logic" is just useful for invalid-idea
> debugging.
> So I recommend to (plan to) support the feature.

All features are useful. The problem is that when we put too many features into a software project, it simply dies.
That's why it's important to prioritize features correctly.

Do you think that logic should be implemented in the first AI prototype?

Fifty years of trying to put logic into the first AI prototype have proven that it's not a very good idea.



6) Reasoning tracking
> It's much easier to track "reasons for all the (sub)decisions"
> for OO-based AI.

No, it's not easier to track reasoning in AI than in a natural intelligent system.
Evolution could have coded such an ability, but it didn't produce 100% tracking of reasoning.
There are essential reasons to avoid 100% reasoning tracking:
such tracking makes an intelligent system more complex, slower, and therefore very awkward.
And an intelligent system is a very fragile system even without such a "tracking improvement".

Bottom line: The first AI prototype doesn't need to track the process of its own reasoning. Only reasoning outcomes should be tracked.
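
A minimal sketch of what outcome-only tracking could look like, under my own assumptions (all names below are hypothetical):

```python
import time

decision_log = []   # outcomes only; no step-by-step reasoning trace

def record_outcome(question, answer, feedback):
    """Store only the final answer and the feedback it received."""
    decision_log.append({
        "time": time.time(),
        "question": question,
        "answer": answer,
        "feedback": feedback,   # e.g. operator's reward or penalty
    })

record_outcome("Which plan fits this user?", "phone_plans", feedback=+1)

# A full reasoning trace would also have to store every activated concept,
# every weight comparison, and every rejected alternative: far more data,
# far more code, and a slower, more fragile prototype.
```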


7) AIML
> Your AI works more-less in the AIML manner. It might be fun to play
> with, but it's a dead end for serious AI research.
> AIML = "Artificial Intelligence Markup Language", used by Alice and
> other famous bots.

Does AIML have the ability to relate every concept to every other concept?
Do these relations have weights?
Does one word correspond to one concept?
Is the learning process automated in Alice?
Is a forgetting feature implemented in Alice?
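
For contrast, here is a minimal sketch of what I mean by weighted concept relations with automated learning and forgetting. All structures and numbers below are hypothetical illustrations, not my actual implementation:

```python
relations = {}   # (concept_a, concept_b) -> relation weight

def learn(concept_a, concept_b, reinforcement=0.1):
    """Strengthen the relation between two concepts each time they co-occur."""
    key = tuple(sorted((concept_a, concept_b)))
    relations[key] = relations.get(key, 0.0) + reinforcement

def forget(decay=0.01):
    """Weaken every relation a little; drop relations that fade to nothing."""
    for key in list(relations):
        relations[key] -= decay
        if relations[key] <= 0.0:
            del relations[key]

learn("cell phone", "phone plan")
learn("cell phone", "phone plan")   # repeated exposure strengthens the relation
learn("cell phone", "biology")
forget()
print(relations)
# {('cell phone', 'phone plan'): ~0.19, ('biology', 'cell phone'): ~0.09}
```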


8)
>>If I need 1 digit precision, then my AI needs just to remember few hundred
>>combinations
> searching for stored instances instead of doing real
> calculation is a tremendous inefficiency for a PC based AI.

Calculation is faster than search, but only if you already know that calculation is necessary. How would you know that calculation is necessary when you parse text?
The only way is to check what you already have in your memory, and there you can often simply find the answer.

But yes, sometimes the required calculations are not that easy. In that case the best approach would be to extract approximate results from the main memory and make precise calculations through math functions.
And again, this math-function integration is not a top-priority feature. It is necessary for technical tasks, not for basic activity.
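
Here is a minimal sketch of that division of labor, with hypothetical names: remembered one-digit facts are recalled directly, and a precise math function is used only as a fallback:

```python
# "Memory": a few hundred remembered single-digit combinations.
remembered_sums = {(a, b): a + b for a in range(10) for b in range(10)}

def answer_sum(a, b):
    """Prefer recall from memory; fall back to a precise calculation if needed."""
    if (a, b) in remembered_sums:
        return remembered_sums[(a, b)], "recalled"
    return a + b, "calculated"   # math function used only as a secondary tool

print(answer_sum(7, 5))     # (12, 'recalled')
print(answer_sum(38, 44))   # (82, 'calculated')
```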


>> Intelligence is possible without ability to count.

> Right, but the ability is IMO essential for a good problem solver.

Correct, but neither you, nor I, nor anyone else can build a good problem solver into the first AI prototype anyway.


9) Design is limited, but not dumb
> Don't waste time with a dumb_AI design.

The design is not dumb; it's limited, and it can be extended in the second AI prototype. Feel the difference.


10) Real-life questions
> If I say obj1
> is above obj2 and then ask if the obj2 is under the obj1 then I expect
> the "Yes" answer based on the scenario model the AI generated in its
> imagination. Not some statistical junk.

This is not a real-life question for an AI.
Far more probable questions are: "Here is my resume; please give me matching openings" or "I'm looking for a cell phone with X, Y, Z features; my friends have P and Q plans; what would you recommend?"

A limited AI can be used to answer these questions.


11) The first AI prototype's target in the intelligent-jobs market
> AI's ability to produce unique and meaningful thoughts. To me, that's
> where the AI gets interesting and I think it should be addressed in
> the early design stages if you want to design a decent AI..

Humans do all kinds of intelligent jobs. Some of them are primitive (like first-level tech support); some are pretty complex (scientist / software architect / entrepreneur / ...).

It's natural for the first AI prototype to try to replace humans in primitive intelligent jobs first. Do you agree?

It's practically impossible to build a first AI prototype that would replace humans in the most advanced intelligent jobs. Agree?


12) "brain design" vs "math calculator"
> don't you see that it's a trully desperate attempt to use
> our brain for something it has an inappropriate design for? The human
> brain is a very poor math-calculator. Let me remind you that your AI
> is being designed to run on a very different platform..

Let me remind you that the human brain is a far better problem solver than any advanced math package.
A modern math package is not able to solve any problem without human help.
A human can solve most problems without a math package.

Think again: what exactly is missing in modern software?
Then draw your own conclusion about what the core AI features are.

The platform is irrelevant here.
So what if you can relatively easily add a calculator feature to the AI? The calculator feature is not critical to intelligence at all. It would just make the first AI prototype more awkward and more time-consuming to develop.
Do you want that?


13) Applicability of math skills to real-life problems
>>> For example, my AI can learn the Pythagoras Theorem: a^2 + b^2 = c^2.
>> How would you reuse this math ability in decision making process like:
>> "finding electrical power provider in my neighborhood"?

> I do not think it would be useful for that purpose (even though a
> powerful AI could make a different conclusion in a particular
> scenario). The point is that general algorithms are useful in many
> cases where particular instance of the algorithm based solution is not
> useful at all.

Do you mean that you have some general algorithm which can solve both the "Pythagoras Theorem" problem and the "finding an electrical power provider in my neighborhood" question?
What is this general algorithm about?


14) Advanced Search
> I do not know how exactly google sorts the results but it seems to
> have some useful algorithms for updating the weights. Are you sure
> your results would be very different?

Yes, they would be different:
1) Google excludes results that don't have an exact match.
2) Google doesn't work with long requests.
3) Google has limited ability to understand natural language.
4) Google doesn't follow an interactive discussion with the user.
I have some ideas about how to improve the final search results, but the first step would still be a search on Google :-)
Because of performance and information-gathering issues.
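
A purely hypothetical sketch of that two-step idea: fetch ordinary search results first, then re-rank them with the AI's own concept weights. The google_search function and the weight table below are stand-ins, not a real API:

```python
def google_search(query):
    """Stand-in: in practice this step would fetch ordinary search-engine results."""
    return [
        {"url": "http://example.com/repair", "text": "cell phone repair shop"},
        {"url": "http://example.com/plans",  "text": "compare cell phone plans"},
    ]

# Hypothetical weights the AI has learned for concepts relevant to the user's goal.
concept_weights = {"plans": 1.0, "compare": 0.6, "repair": 0.1}

def rerank(query):
    """Re-order the raw results by accumulated concept weight."""
    def score(result):
        return sum(weight for concept, weight in concept_weights.items()
                   if concept in result["text"])
    return sorted(google_search(query), key=score, reverse=True)

for result in rerank("cheap cell phone plan with a good data option"):
    print(result["url"])
# The "plans" page comes first even though the raw search returned it second.
```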


> Since you work on a dumb AI which IMO
> does not have a good potential to become strong AI, the related
> discussion is a low priority to me.

Again, it's not dumb. It's limited because it's just the first prototype.

Do you prefer a waterfall development process or Rapid Application Development (RAD) in software development?
What about your preferences in research and development?
