Sunday, July 24, 2005



I cannot find it now on your site, but it seems your system has, or will have, opposites to goals (was it goals with negative desirability?)

Answer: In general, the same supergoal works in both negative and positive directions.
A supergoal can give both positive and negative reward to the same concept.
For example, the supergoal "Want more money" could give a negative reward to the "Buy Google stock" concept, responsible for investing money into Google stock, because it caused money spending. One year later, the same "Want more money" supergoal may give a positive reward to the same "Buy Google stock" concept, because this investment made the system richer.

Supergoal: "can act" or "state only"?

Supergoals can act. Supergoal actions are about modification of softcoded goals.
Usually a supergoal has state. Typically, supergoal state keeps information about the supergoal's satisfaction level at this moment. A supergoal may be stateless too.
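The "Buy Google stock" example above can be sketched in code. This is a minimal illustration, not the system's actual implementation; the class and field names are my assumptions:

```python
# Sketch: a stateful supergoal that rewards the same concept
# negatively or positively, depending on how its state changes.

class Concept:
    def __init__(self, name):
        self.name = name
        self.desirability = 0.0  # softcoded goal attribute

class WantMoreMoney:
    """Supergoal with state: remembers the last bank balance and
    rewards a cause concept according to how the balance changed."""
    def __init__(self, initial_balance):
        self.last_balance = initial_balance  # supergoal state

    def evaluate(self, current_balance, cause_concept):
        # Positive reward if money grew, negative if it shrank.
        reward = current_balance - self.last_balance
        cause_concept.desirability += reward
        self.last_balance = current_balance
        return reward

buy_google = Concept("Buy Google stock")
goal = WantMoreMoney(initial_balance=1000)

goal.evaluate(900, buy_google)   # buying cost money: negative reward (-100)
goal.evaluate(1500, buy_google)  # later the investment paid off: +600
print(buy_google.desirability)   # net effect on the concept: 500
```

The same supergoal first punishes and later rewards the same concept; only the supergoal's observed state changed.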

Thursday, July 21, 2005

Glue for the system

It seems to me that you use cause-effect relations as glue to put concepts together, so they form connected knowledge. Is that the only glue your system has?

Yes, correct: cause-effect relations are the only glue that puts concepts together.
I decided to have one type of glue instead of many types; it's easier to work with one.

At the same time, I have something else that you may consider glue for the whole system:
1) Desirability attributes (softcoded goals): keep information about the system's priorities.
2) Hardcoded units: connect concepts to the real world. Supergoals are a special subset of these hardcoded units.
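A toy sketch of what "one type of glue" could look like, assuming a single cause-effect relation type; all names and weights here are illustrative, not taken from the actual system:

```python
# A knowledge base glued together by exactly one relation type:
# cause-effect links between concepts.

concepts = {1: "rain", 2: "wet street", 3: "slippery street"}

# Each relation: (cause_concept_id, effect_concept_id, weight)
relations = [
    (1, 2, 0.9),   # rain causes wet street
    (2, 3, 0.7),   # wet street causes slippery street
]

def effects_of(concept_id):
    """Follow cause-effect links: the only 'glue' in the knowledge base."""
    return [(concepts[e], w) for c, e, w in relations if c == concept_id]

print(effects_of(1))  # [('wet street', 0.9)]
```

With a single relation type, traversal, storage, and learning routines all operate on one uniform structure.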

Monday, July 18, 2005

What AI ideas has Google introduced?

Google did not introduce, but practically demonstrated, the following ideas:

1) Words are the smallest units of intelligent information. A word alone has meaning; a letter alone doesn't. Google searches for words as a whole, not for letters or substrings.

2) Phrases are important units of information too. Google underlines the importance of phrases by supporting quoted search, like "test phrase".

3) Natural language (plain text) is the best way to share knowledge between intelligent systems (people and computers).

4) The programming languages that are best for mainstream programming are also best for intelligent system development. LISP, Prolog, and other artificial programming languages are less efficient for intelligence development than mainstream languages like C/C++/C#/VB. (Google proved this idea by using plain C as the core language for its "advanced text manipulation project".)

5) A huge knowledge base matters for intelligence. Google underlines the importance of a huge knowledge base.

6) Simplicity of knowledge base structure matters. In comparison with CYC's model, Google's model is relatively simple. Obviously, Google is more efficient/intelligent than the dead CYC.

7) An intelligent system must collect data automatically, by itself, as Google's crawler does. An intelligent system should not expect to be manually fed by developers (as CYC is).

8) To improve information quality, an intelligent system should collect information from different types of sources. Google collects web pages from the Web, but it also collects information from the Google Toolbar about which web pages are popular among users.

9) Constant updates and forgetting keep an intelligent system sane (Google constantly crawls the Web, adding new web pages and deleting dead ones from its memory).

10) Links (relations) add intelligence to a knowledge base (search engines made the Web more intelligent).
Good links convert a knowledge base into an intelligent system (Google's index, together with the Web, works as a very wise adviser; read: an intelligent system).

11) Links must have weights (as in Google's PageRank), and these weights must be taken into consideration in decision making.

12) A couple of talented researchers can do far more than lots of money in the wrong hands. Think about "'Sergey Brin & Larry Page's search' vs 'Microsoft's search'".

13) Sharing ideas with the public helps a research project come to production; hiding ideas kills the project in the cradle. Google is very open about its technology, and very successful.

14) Targeting practical results helps a research project a lot. Instead of doing "abstract research about search", Google targeted "advanced web search". The project's success criteria were clearly defined. As a result, the Google project quickly hit production and generated a tremendous outcome in many ways.

Sunday, July 17, 2005

How does strong AI schedule super goals?

Strong AI doesn't schedule supergoals directly. Instead, strong AI schedules softcoded goals. To be more exact, supergoals schedule softcoded goals by making them more or less desirable (see Reward distribution routine). The more desirable a softcoded goal is, the higher the probability that this softcoded goal will be activated and executed.
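The "more desirable means more likely to be activated" rule can be sketched as weighted random selection. The goal names and desirability values below are invented for illustration:

```python
# Sketch: softcoded goals are activated with probability
# proportional to their desirability.

import random

softcoded_goals = {"earn money": 5.0, "learn chess": 2.0, "clean cache": 1.0}

def pick_goal(goals):
    """Pick one goal, weighted by desirability."""
    names = list(goals)
    weights = [goals[n] for n in names]
    return random.choices(names, weights=weights, k=1)[0]

# Over many scheduling cycles, the most desirable goal wins most often.
counts = {n: 0 for n in softcoded_goals}
for _ in range(10000):
    counts[pick_goal(softcoded_goals)] += 1
print(counts)
```

Note that less desirable goals are still occasionally activated, so the system never fully starves a low-priority goal.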

How does strong AI find a way to satisfy a supergoal?

The idea is simple: whatever satisfies a supergoal now most probably will satisfy that supergoal in the future. In order to apply this idea, supergoals must be programmed in a certain way: every supergoal must itself be able to distinguish what is good and what is bad.
This approach makes a supergoal a kind of "advanced sensor".
Actually, not only an "advanced sensor", but also a "desire enforcer".

Here's an example of how it works:
Supergoal's objective: to be rich.
Supergoal sensor implementation: check strong AI's bank account for the amount of money in it.
Supergoal enforcement mechanism: mark every concept that causes an increase in the bank account balance as "desirable"; mark every concept that causes a decrease as "not desirable".

Note: "mark concept as desirable/undesirable" doesn't really work in "black & white" mode. A subtle supergoal enforcement mechanism instead increases or decreases the desirability of every cause concept affecting the bank account balance.
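The graded (non black-and-white) enforcement could look like the sketch below: the balance change is spread across all cause concepts in proportion to how much each contributed. The contribution weights and concept names are illustrative assumptions:

```python
# Sketch of "subtle" supergoal enforcement: distribute a reward
# (the bank balance change) among all cause concepts, weighted by
# each concept's contribution, instead of a binary desirable mark.

def distribute_reward(balance_delta, cause_contributions, desirability):
    """cause_contributions: {concept_name: contribution weight}."""
    total = sum(cause_contributions.values())
    for concept, contribution in cause_contributions.items():
        desirability[concept] = desirability.get(concept, 0.0) \
            + balance_delta * contribution / total

desirability = {}
# Balance dropped by 200; two concepts contributed to the spending.
distribute_reward(-200,
                  {"Buy Google stock": 0.75, "Pay broker fee": 0.25},
                  desirability)
print(desirability)  # {'Buy Google stock': -150.0, 'Pay broker fee': -50.0}
```

Each concept ends up a little more or less desirable, proportional to its share of the effect, rather than flatly "good" or "bad".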

Concept type

Your concepts have types: word, phrase, simple concept, and peripheral device. What is the logic behind having these types?
In fact, "peripheral device" is not just one type; there could be many peripheral devices.
A peripheral device is a subset of hardcoded units.
A concept can be of any hardcoded unit type.
Moreover, one hardcoded unit can be related to concepts of several types.
For example, the text parser has direct relations with word concepts and phrase concepts. (Please don't confuse these "direct relations" with relations in the main memory.)
OK, now we see that strong AI has many concept types. How many? As many as AI software developers code in hardcoded units. 5-10 concept types is a good start for a strong AI prototype; 100 concept types is probably a good number for a real-life strong AI; 1000 concept types is probably too many.

So, what is a "concept type"? A concept type is just a reference from a concept to a hardcoded unit; in other words, a reference from a concept to the real world through a hardcoded unit.
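A minimal sketch of "type as a reference to a hardcoded unit"; the unit names and the Concept structure are hypothetical:

```python
# Sketch: a concept's type is nothing more than a reference to the
# hardcoded unit (peripheral device) that grounds it in the real world.

HARDCODED_UNITS = {
    "text_parser": "parses words and phrases",
    "temperature_sensor": "reads ambient temperature",
}

class Concept:
    def __init__(self, concept_id, unit_name):
        # The type must point at an existing hardcoded unit.
        assert unit_name in HARDCODED_UNITS
        self.id = concept_id
        self.type = unit_name

word_tiger = Concept(42, "text_parser")
print(word_tiger.type)  # text_parser
```

Adding a new concept type then simply means coding a new hardcoded unit and letting concepts reference it.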

What concept types should be added to strong AI?
If an AI developer feels that concept type XYZ is useful for strong AI...
and if the AI developer can code this XYZ concept type in a hardcoded unit...
and if this functionality is not yet implemented in another hardcoded unit...
and if the main memory structure doesn't have to be modified to accommodate this new concept type...
then the developer may add this XYZ concept type to strong AI.

What concept types should not be added?
- I feel that concept types such as "verb" and "noun" should not be added, because there is no clear algorithm to distinguish between verbs and nouns.
- I feel that a "property" concept type should not be used, because it is already covered by cause-effect relationships, and because implementing property-type concepts would make the main memory structure more complex.

How naked is a concept?

There is a concept ID, which you use when referring to some concept. When coding, everyone will have these IDs; the question is how "naked" they are, i.e., how they are related to objective reality.

A concept alone is very naked; the concept ID is the core of a concept.
A concept is related to objective reality through relations to other concepts.
Some concepts are related to objective reality through special devices.
An example of such a device is the text parser.
An example of a connection between a concept and objective reality: a temperature sensor connected to a temperature-sensor concept.

Saturday, July 16, 2005

What learning algorithms does your AI system use?

Strong AI learns in two ways:
1. Experiment.
2. Knowledge download.
See also: Learning.

What do you use to represent information inside of the system?

From the "information representation" point of view, there are two types of information:
1) Main information: information about anything in the real world.
2) Auxiliary information: information that helps connect main information with the real world.
Examples of auxiliary information: words, phrases, email contacts, URLs, ...

How main information is represented

Basically, main information is represented in the form of concepts and relations between concepts.
From the developer's perspective, all concepts are stored in the Concept table, and all relations are stored in the Relation table.
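A sketch of the two tables, assuming the Concept/Relation split described above; the column names are my guesses, not the actual schema:

```python
# Sketch: main information as two tables, Concept and Relation,
# using an in-memory SQLite database for illustration.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE Concept (ID INTEGER PRIMARY KEY, Type TEXT);
CREATE TABLE Relation (
    CauseID  INTEGER REFERENCES Concept(ID),
    EffectID INTEGER REFERENCES Concept(ID),
    Weight   REAL
);
""")
db.execute("INSERT INTO Concept VALUES (1, 'word'), (2, 'word')")
db.execute("INSERT INTO Relation VALUES (1, 2, 0.8)")

row = db.execute("SELECT CauseID, EffectID, Weight FROM Relation").fetchone()
print(row)  # (1, 2, 0.8)
```

Everything in main memory reduces to rows in these two tables: concepts, plus weighted cause-effect links between them.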

Auxiliary information representation

In order to connect main information to the real world, AI needs some additional information. Just as the human brain's cortex cannot read, hear, speak, or write by itself, main memory cannot be directly connected to the real world.
So AI needs peripheral devices, and these devices need to store some internal information for themselves. I call all this information for peripheral devices "auxiliary information".
Auxiliary information is stored in tables designed by the AI developer on a case-by-case basis, taking the architecture of the peripheral module into consideration.
For example, words are kept in the WordDictionary table and phrases in the PhraseDictionary table.
As I said, auxiliary information connects main information with the real world.
Example of such a connection:
The abstract concept "animal" can relate to the concepts "cat", "tiger", and "rabbit". The concept "tiger" can be stored (as a word) in the word dictionary.
In addition, auxiliary information may or may not be duplicated as main information.
The text parser may read the word "tiger" and find it in the word dictionary; then the AI may meditate on the "tiger" concept and give back some thoughts to the real world.
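The word-dictionary connection can be sketched like this; the dictionary contents, concept IDs, and parser behavior are illustrative assumptions:

```python
# Sketch: auxiliary information (a word dictionary) links raw text
# to main-memory concepts via the text parser.

word_dictionary = {"tiger": 101, "cat": 102}          # auxiliary: word -> concept ID
concepts = {100: "animal", 101: "tiger", 102: "cat"}  # main information
relations = [(101, 100), (102, 100)]                  # tiger/cat relate to animal

def parse(text):
    """Text parser: map known words in the input to concept IDs."""
    return [word_dictionary[w] for w in text.lower().split()
            if w in word_dictionary]

ids = parse("A tiger sleeps")
print(ids)  # [101]
```

Once the parser has produced concept IDs, the rest of the system can reason purely over main memory, without caring about letters or spelling.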