Colloquium on the Law of Transhuman Persons
There are photos here of how they discussed law related to transhumans. Florida beach pictures included :-)
Friday, December 16, 2005
Thursday, December 15, 2005
How to prevent bad guys from using the results of AI research?
David Sanders> I would like to see a section up on your site about the downsides of AIS and what preventative limits need to take place in research to ensure that AIS come out as the "good" part of humans and not the bad part. The military is already building robotic, self propelled and thinking vehicles with weapons.
Recipe for "safe from bad guys research" is the same as recipe for any
research: openness.
When ideas are available for society - many people (and later many
machines) would compete in implementation of these ideas. And society
(human society / machine society / or mixed society) - would setup
rules which would prevent major misuse of new technology.
David Sanders> How long do we really have before an AIS (demented or otherwise) decides to eliminate its maker?
Why would you care?
Some children kill their parents. Did our society collapse because of that?
Some AISes would be bad. Bad not just toward humans, but toward other AISes.
But, as usual, the bad guys wouldn't be a majority.
David Sanders> As countless science fiction stories have told us, even the most innocent of actions by an AIS may spell disaster,
1) These are fiction stories.
2) Some humans can cause disasters too, so what?
David Sanders> because like I said above they don't fundamentally understand us, and we don't understand them.
Why wouldn't AISes understand humans?
David Sanders> We will be two completely different species, and they might not hold the same sanctity of life most of us are born with.
Humans are not born with sanctity. Humans gain it (or don't) while they grow.
The same would apply to machines.
Recipe for "safe from bad guys research" is the same as recipe for any
research: openness.
When ideas are available for society - many people (and later many
machines) would compete in implementation of these ideas. And society
(human society / machine society / or mixed society) - would setup
rules which would prevent major misuse of new technology.
David Sanders> How long do we really have before an AIS, demented or otherwise) decides to eliminate its maker?
Why would you care?
Some children kill their parents. Did our society collapsed because of
that?
Some AISes would be bad. Bad not just toward humans, but toward other
AISes.
But as usual --- bad guys wouldn't be a majority.
David Sanders> As countless science fiction stories have told us, even the most innocent of actions by an AIS may spell disaster,
1) These are fiction stories.
2) Some humans can cause disasters too, so what?
David Sanders> because like I said above the don't fundamentally understand us, and we don't understand them.
Why wouldn't AISes understand humans?
David Sanders> We will be two completely different species, and they might not hold the same sanctity of life most of us are born with.
Humans are not born with sanctity. Humans gain it (or not gain) while
they grow.
Same would apply to machines.
Discussion about AIS weaknesses
This discussion was inspired by the web page Weaknesses of AIS.
David Sanders> AIS cannot exist (for now) without humans.
That's not really a weakness, because the time span of this weakness would be pretty short. Right now strong AI systems exist only in our dreams. :-) Within ~20 years of creating strong AI, many AISes would be able to survive without humans. Please note that AISes would not kill humans. There would be benefits of human-AIS collaboration for all sides. That is a completely different topic, though. :-)
David Sanders> If they fail to understand and appreciate the human world...
If you don't understand and appreciate the human world of Central Africa... does it harm you?
Maybe you mean "If AISes don't understand the human world at all"? But in that case, what would these AISes understand? And in what sense would such non-understanding systems be intelligent?
David Sanders> [AIS-es] Not able to perceive like a human. They cannot hear, see, feel, taste or smell like a human.
Not true. Only the first, limited versions of AISes wouldn't be able to perceive like a human. Sensor devices are not too hard to implement. The major problem is the implementation of the Main Mind for AIS.
David Sanders> They can only feel these things like they imagine they do. Again, this makes them fundamentally incongruous with humans and I don't believe its something you can "teach around." Try to explain what "blue" is to someone who never had sight.
Have you ever seen a "black hole", "conscience", or an "electron"? Yet you know what they are, don't you? :-)
A blind person can understand what "blue" means: "the sky is blue", "water is blue", ...
David Sanders> Until AIS have robot bodies / companions, they rely on humans for natural resources. However, once the singularity hits, that probably won't matter anymore. It is not inconceivable to think of a time in 200-500 years there are no more humans, just AIS.
Humans would probably exist long after strong AI is created. Humans just would not be the most intelligent creatures anymore :-)
David Sanders> I disagree with AIS and natural selection. I think this will happen on its own by their very nature.
AISes can be influenced by natural selection as much as all other living organisms. But humans had millions of years of natural selection. When would AISes get that much?
David Sanders> AIS will be more open about self modification as you point out. AIS will be able to make other AIS and will soon learn how to evolve themselves very quickly.
"Evolving themselves" is part of artificial selection, not natural selection.
Monday, November 28, 2005
Matt Bamberger
Matt worked for Microsoft, tried to retire ... unsuccessfully, so he works again and has extensive software development experience. Matt is interested in AGI (Artificial General Intelligence) and Singularity.
Wednesday, October 19, 2005
An Integrated Self-Aware Cognitive Architecture
That looks like a very interesting project in a Strong AI field.
Though I (Dennis) personally disagree with a couple of basic ideas here.
1) It seems that Alexei Samsonovich pays a lot of attention to self-awareness.
It's not clear to me why self-awareness is more important than awareness of the surrounding world in general.
2) Another questionable thing is the idea of AI being autonomous.
As far as I know, there is no intelligent system that is autonomous from society. A human baby would never become intelligent without society.
In order to make the AI system intelligent, Alexei Samsonovich would have to connect the system to society somehow, for example through the Internet.
Anyway, the following looks like a great AI project.
You may want to try to take part in it.
From: Alexei V Samsonovich
Date: Tue, 18 Oct 2005 06:02:46 -0400
Subject: GRA positions available
Dear Colleague:
As a part of a research team at KIAS (GMU, Fairfax, VA), I am searching
for graduate students who are interested in working during one year,
starting immediately, on a very ambitious project supported by our
recently funded DARPA grant. The title is "An Integrated Self-Aware
Cognitive Architecture". The grant may be extended for the following
years. The objective is to create a self-aware, conscious entity in a
computer. This entity is expected to be capable of autonomous cognitive
growth, basic human-like behavior, and the key human abilities including
learning, imagery, social interactions and emotions. The agent should be
able to learn autonomously in a broad range of real-world paradigms.
During the first year, the official goal is to design the architecture,
but we are planning implementation experiments as well.
We are currently looking for several students. The available positions
must be filled as soon as possible, but no later than by the beginning
of the Spring 2006 semester. Specifically, we are looking for a student
to work on the symbolic part of the project and a student to work on the
neuromorphic part, as explained below.
A symbolic student must have a strong background in computer science,
plus a strong interest and an ambition toward creating a model of the
human mind. The task will be to design and to implement the core
architecture, while testing its conceptual framework on selected
practically interesting paradigms, and to integrate it with the
neuromorphic component. Specific background and experience in one of the
following areas is desirable: (1) cognitive architectures / intelligent
agent design; (2) computational linguistics / natural language
understanding; (3) hacking / phishing / network intrusion detection; (4)
advanced robotics / computer-human interface.
A neuromorphic candidate is expected to have a minimal background in one
of the following three fields. (1) Modern cognitive neuropsychology,
including, in particular, episodic and semantic memory, theory-of-mind,
the self and emotion studies, familiarity with functional neuroanatomy,
functional brain imaging data, cognitive-psychological models of memory
and attention. (2) Behavioral / system-level / computational
neuroscience. (3) Attractor neural network theory and computational
modeling. With a background in one of the fields, the student must be
willing to learn the other two fields, as the task will be to put them
together in a neuromorphic hybrid architecture design (that will also
include the symbolic core) and to map the result onto the human brain.
Not to mention that all candidates are expected to be interested in the
modern problem of consciousness, willing to learn new paradigms of
research, and committed to success of the team. Given the circumstances,
however, we do not expect all conditions listed above to be met. Our
minimal criterion is the excitement and the desire of an applicant to
build an artificial mind. I should add that this bold and seemingly
risky project provides a unique in the world opportunity to engage with
emergent, revolutionary activity that may change our lives.
Cordially,
Alexei Samsonovich
--
Alexei V Samsonovich, Ph.D.
George Mason University at Fairfax VA
703-993-4385 (o), 703-447-8032 (c)
Alexei V Samsonovich web site
Thursday, September 22, 2005
Lies, Damned Lies, Statistics, and Probability of Abiogenesis Calculations
Abiogenesis - how life formed by itself.
Friday, August 12, 2005
Wired 13.08: The Birth of Google
It began with an argument. When he first met Larry Page in the summer of 1995, Sergey Brin was a second-year grad student in the computer science department at Stanford University.....
Sunday, July 24, 2005
Supergoals
Anti-goals
Question: I cannot find it now on your site, but it seems your system has (or will have) the opposites of goals (was it goals with negative desirability?)
Answer: In general, the same supergoal works in both negative and positive directions.
A supergoal can give both positive and negative reward to the same concept.
For example, the supergoal "Want more money" could give a negative reward to the "Buy Google stock" concept, which is responsible for investing money into Google stock, because it caused money spending. One year later the same "Want more money" supergoal may give a positive reward to the same "Buy Google stock" concept, because this investment made the system richer.
Supergoal: "can act" or "state only"?
Supergoals can act. Supergoal actions are about the modification of softcoded goals.
Usually a supergoal has state. Typically the supergoal state keeps information about the supergoal's satisfaction level at the moment. A supergoal may be stateless too.
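To make this concrete, here is a minimal, hypothetical sketch (the class and variable names are mine, not the actual implementation) of a supergoal that keeps a state and can push the desirability of the same concept in either direction over time, as in the "Buy Google stock" example above:

```python
# Hypothetical sketch: one supergoal rewards the same concept
# negatively at first and positively later.

class Concept:
    def __init__(self, name):
        self.name = name
        self.desirability = 0.0          # softcoded goal attached to the concept

class Supergoal:
    def __init__(self, name):
        self.name = name
        self.satisfaction = 0.0          # supergoal state

    def reward(self, concept, delta):
        # Positive delta makes the concept more desirable,
        # negative delta makes it less desirable.
        concept.desirability += delta

want_more_money = Supergoal("Want more money")
buy_google_stock = Concept("Buy Google stock")

# Buying stock spends money now -> negative reward.
want_more_money.reward(buy_google_stock, -0.3)

# A year later the investment paid off -> positive reward to the same concept.
want_more_money.reward(buy_google_stock, +0.8)

print(buy_google_stock.desirability)     # net effect: 0.5
```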
Thursday, July 21, 2005
Glue for the system
It seems to me that you use cause-effect relations as a glue to put concepts together, so they form connected knowledge; is it the only glue your system has?
Yes, correct: cause-effect relations are the only glue that puts concepts together.
I decided to have one type of glue instead of many types of glue.
It's easier to work with one type of glue.
At the same time, I have something else that you may consider glue for the whole system:
1) Desirability attributes (softcoded goals) - keep information about the system's priorities.
2) Hardcoded units - connect concepts to the real world. Supergoals are a special subset of these hardcoded units.
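A minimal sketch of this single-glue idea (the names are assumed for illustration; the real storage is the Concept and Relation tables described elsewhere on this blog):

```python
# Hypothetical sketch: concepts held together only by weighted
# cause-effect relations, plus desirability attributes as softcoded goals.

concepts = {
    1: {"name": "rain",        "desirability": 0.0},
    2: {"name": "wet streets", "desirability": -0.1},
}

# The only glue: (cause_id, effect_id) -> relation weight.
relations = {
    (1, 2): 0.9,   # "rain" causes "wet streets" with high weight
}

def effects_of(cause_id):
    """Return the concepts this concept is a cause of, with weights."""
    return [(concepts[e]["name"], w)
            for (c, e), w in relations.items() if c == cause_id]

print(effects_of(1))   # [('wet streets', 0.9)]
```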
Monday, July 18, 2005
What AI ideas has Google introduced?
Google didn't introduce, but practically demonstrated, the following ideas:
1) Words are the smallest units of intelligent information. A word alone has meaning; a letter alone doesn't. Google searches for whole words, not for letters or substrings.
2) Phrases are important units of information too. Google underlines the importance of phrases by supporting search in quotes, like "test phrase".
3) Natural language (plain text) is the best way to share knowledge between intelligent systems (people and computers).
4) The programming languages that are best for mainstream programming are also best for intelligent system development. LISP, Prolog, and other artificial programming languages are less efficient for intelligence development than mainstream languages like C/C++/C#/VB. (Google proved this idea by using plain C as the core language for its "advanced text manipulation project".)
5) A huge knowledge base matters for intelligence. Google underlines the importance of a huge knowledge base.
6) Simplicity of the knowledge base structure matters. In comparison with CYC's model, Google's model is relatively simple. Obviously Google is more efficient/intelligent than dead CYC.
7) An intelligent system must collect data automatically (by itself, like Google's crawler does). An intelligent system should not expect to be manually fed by developers (like CYC).
8) To improve information quality, an intelligent system should collect information from different types of sources. Google collects web pages from the Web, but it also collects information from the Google toolbar - about which web pages are popular among users.
9) Constant updates and forgetting keep an intelligent system sane (Google constantly crawls the Web, adds new web pages, and deletes dead web pages from its memory).
10) Links (relations) add intelligence to a knowledge base (search engines made the Web more intelligent).
Good links convert a knowledge base into an intelligent system (Google's index together with the Web works as a very wise adviser - read: an intelligent system).
11) Links must have weights (like in Google's PageRank). These weights must be taken into consideration in decision making.
12) A couple of talented researchers can do far more than lots of money in the wrong hands. Think about 'Sergey Brin & Larry Page search' vs. 'Microsoft search'.
13) Sharing ideas with the public helps a research project come to production. Hiding ideas kills the project in the cradle. Google is very open about its technology. And very successful.
14) Targeting practical results helps a research project a lot. Instead of doing "abstract research about search", Google targeted "advanced web search". The criteria of success for the project were clearly defined. As a result, the Google project quickly hit production and generated a tremendous outcome in many ways.
Sunday, July 17, 2005
How does strong AI schedule super goals?
Strong AI doesn't schedule super goals directly. Instead, strong AI schedules softcoded goals. To be more exact, super goals schedule softcoded goals by making them more or less desirable (see Reward distribution routine). The more desirable a softcoded goal is, the higher the probability that this softcoded goal will be activated and executed.
How strong AI finds a way to satisfy a super goal
The idea is simple: whatever satisfies a super goal now most probably would satisfy the super goal in the future. In order to apply this idea, super goals must be programmed in a certain way. Every super goal itself must be able to distinguish what is good and what is bad.
Such an approach makes a super goal a kind of "advanced sensor".
Actually, not only an "advanced sensor", but also a "desire enforcer".
Here's an example of how it works:
Super goal's objective: to be rich.
Super goal sensor implementation: check strong AI's bank account for the amount of money in it.
Super goal enforcement mechanism: mark every concept that causes the bank account balance to increase as "desirable". Mark every concept that causes the bank account balance to decrease as "not desirable".
Note: "mark concept as desirable/undesirable" doesn't really work in "black & white" mode. A subtle super goal enforcement mechanism either increases or decreases the desirability of every cause concept affecting the bank account balance.
Concept type
Your concepts have types: word, phrase, simple concept, and peripheral device. What is the logic behind having these types?
In fact "peripheral device" is not just one type. There could be many peripheral devices.
Peripheral device is a subset of hardcoded units
Concept can be of any hardcoded unit type.
Moreover, one hardcoded unit can be related to concepts of several types.
For example: text parser has direct relations with concept-words and concept-phrases. (Please don't confuse these "direct relations" with relations in the main memory).
Ok, now we see that strong AI has many concept types. How many? As many as AI software developer code in hardcoded units. 5-10 concept types is a good start for strong AI prototype. 100 concept types is probably good number for real life strong AI. 1000 concept types is probably too many.
So, what is a "concept type"? Concept type is just a reference from concept to hardcoded unit. Concept type is a reference from concept to real world through a hardcoded unit.
What concept types should be added to strong AI?
If the AI developer feels that concept type XYZ is useful for strong AI...
and if the AI developer can code this XYZ concept type in a hardcoded unit...
and if this functionality is not implemented in another hardcoded unit yet...
and the main memory structure doesn't have to be modified to accommodate this new concept type...
then the developer may add this XYZ concept type to strong AI.
What concept types should not be added?
- I feel that concept types such as "verb" and "noun" should not be added, because there is no clear algorithm to distinguish between verbs and nouns.
- I feel that a "property concept type" should not be used, because it is already covered by cause-effect relationships and because implementing property-type concepts would make the main memory structure more complex.
In fact "peripheral device" is not just one type. There could be many peripheral devices.
Peripheral device is a subset of hardcoded units
Concept can be of any hardcoded unit type.
Moreover, one hardcoded unit can be related to concepts of several types.
For example: text parser has direct relations with concept-words and concept-phrases. (Please don't confuse these "direct relations" with relations in the main memory).
Ok, now we see that strong AI has many concept types. How many? As many as AI software developer code in hardcoded units. 5-10 concept types is a good start for strong AI prototype. 100 concept types is probably good number for real life strong AI. 1000 concept types is probably too many.
So, what is a "concept type"? Concept type is just a reference from concept to hardcoded unit. Concept type is a reference from concept to real world through a hardcoded unit.
What concept types shold be added to strong AI?
If AI developer feels that concept type XYZ is useful for strong AI...
and if the AI developer can code this XYZ concept type in hardcoded unit...
and if this functionality is not implemented in other hardcoded unit yet...
and the main memory structure doesn't have to be modified to accomodate this new concept type...
then the developer may add this XYZ concept type to strong AI.
What concept types should not be added?
- I feel that such concept types as "verb" and "noun" should not be added, because there is no clear algorithm to distinguish between verbs and nouns.
- I feel that "property concept type" should not be used, because "property concept type" is already covered by "cause-effect relationships" and because implementation of property type concepts will make main memory structure more complex.
How naked is a concept?
There is a concept ID, which you use when referring to some concept. When coding, everyone will have these IDs; the question is how "naked" they are, i.e. how they are related to objective reality.
A concept alone is very naked. The concept ID is the core of a concept.
A concept is related to objective reality through relations to other concepts.
Some concepts are related to objective reality through special devices.
An example of such a device could be a text parser.
An example of a connection between a concept and objective reality: a temperature sensor connected to a temperature-sensor concept.
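A minimal sketch of how a "naked" concept might get tied to reality through a device (the sensor function below is a stand-in, and all names are assumptions):

```python
# Hypothetical sketch: a concept is essentially an ID; some concepts are
# bound to a peripheral device (here, a fake temperature sensor).

import random

def read_temperature():
    # Stand-in for a real sensor.
    return 20.0 + random.random()

concepts = {
    101: {"name": "temperature sensor", "device": read_temperature},
    102: {"name": "warm",               "device": None},
}
relations = {(101, 102): 0.7}   # sensor readings relate to the "warm" concept

reading = concepts[101]["device"]()
print(f"concept 101 reads {reading:.1f} C, related to 'warm' with weight "
      f"{relations[(101, 102)]}")
```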
Saturday, July 16, 2005
What learning algorithms does your AI system use?
Strong AI learns in two ways:
1. Experiment.
2. Knowledge download.
See also: Learning.
What do you use to represent information inside of the system?
From "information representation" point of view there are two types of information:1) Main information - information about anything in the real world.
2) Auxiliary information - information which helps to connect main information with the real world.
Examples of auxiliary information: words, phrases, email contacts, URLs, ...
How main information is represented
Basically, main information is represented in the form of concepts and relations between concepts.
From the developer's perspective, all concepts are stored in the Concept table. All relations are stored in the Relation table.
Auxiliary information representation
In order to connect main information to the real world, AI needs some additional information. Just as the human brain's cortex cannot read, hear, speak, or write by itself, the main memory cannot be connected to the real world directly. So, AI needs some peripheral devices. And these devices need to store some internal information for themselves. I call all this information for peripheral devices "auxiliary information".
Auxiliary information is stored in tables designed by the AI developer. These tables are designed on a case-by-case basis. The architecture of the peripheral module is taken into consideration.
For example, words are kept in the WordDictionary table; phrases are kept in the PhraseDictionary table.
As I said, auxiliary information connects main information with the real world.
Example of such a connection:
The abstract concept "animal" can relate to the concepts "cat", "tiger", and "rabbit". The concept "tiger" can be stored in the word dictionary.
In addition, auxiliary information may or may not be duplicated as main information.
The text parser may read the word "tiger" and find it in the word dictionary. Then AI may meditate on the "tiger" concept and give back some thoughts to the real world.
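A hedged sketch of this storage layout in code (the table and dictionary names follow the post; the columns, IDs, and sample data are assumptions):

```python
# Hypothetical sketch of main vs auxiliary information storage.

# Main information: Concept table and Relation table.
concept_table = {
    1: "animal",
    2: "tiger",
}
relation_table = [
    {"cause": 2, "effect": 1, "weight": 0.8},   # "tiger" relates to "animal"
]

# Auxiliary information: WordDictionary links words to concept IDs.
word_dictionary = {
    "tiger": 2,
}

# The text parser reads the word "tiger", finds the concept,
# and the main memory can then "meditate" on related concepts.
concept_id = word_dictionary["tiger"]
related = [r["effect"] for r in relation_table if r["cause"] == concept_id]
print(concept_table[concept_id], "->", [concept_table[i] for i in related])
```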
Monday, May 30, 2005
AI tools
Internal and external tools
Internal tools
Internal tools are tools that are integrated into AI by the AI developer.
A human analogue would be a hand plus the motion part of the brain, which a human has from birth. Another example: eyes plus the vision center of the brain --- this vision tool is also integrated into the human brain before the brain starts to work.
External tools
External tools are tools that are integrated into AI by the AI itself. AI learns about the tool from its own experience or from external knowledge, then practices using the tool, and then uses it.
A human analogue here would be an axe. Another example could be a calculator.
Indistinct boundaries between Internal and External tools
How would you classify a heart pacemaker? Without this tool some people cannot live. Also, a human doesn't have to learn how to use a heart pacemaker. At the same time, humans don't get a heart pacemaker with their body. Is it an external or an internal tool for humans?
In the case of AI, the intermingling between internal and external tools is even deeper, because AI is pretty flexible.
For example, AI can learn about an advanced math tool from an article in a magazine, and then integrate itself with this tool. Such integration can be very tight, since computers have a very extendable architecture (in comparison with humans). So, an "external tool" can become an "internal tool".
Internal tools
Importance of internal tools
Internal tools are very important for AI because a mind cannot communicate with the world without tools. External tools are unavailable to a mind without internal tools.
Internal tools integration with AI
Internal tools are connected with the mind through a set of neurons. This set of neurons is associated with the tool. When the set is active, the tool is active. When the tool is active, the set of neurons is active.
Example:
Let's consider internal tool integration using the example of a "chat client program" (like ICQ, MSN, or Yahoo messenger).
The "chat client program" is represented in the main memory by the neuron nChatClientProgram.
If AI decides to chat, then AI activates the nChatClientProgram neuron. That activates the "chat client program" (the tool). The tool reads active memory concepts, converts them into text, and sends a text message over the Internet. After that, the tool activates the neuron nChatClientProgramAnswerWaitMode in the main memory.
When the tool gets a response from the Internet, the tool:
- Parses the incoming text and puts the received concepts into short memory.
- Activates the neuron nChatClientProgramAnswerReceived.
Activation of nChatClientProgramAnswerReceived causes execution of the softcoded routine associated with the nChatClientProgramAnswerReceived neuron.
After execution, the results are evaluated against AI's super goals. AI learns from the experience, in particular:
1. The desirability of the nChatClientProgram, nChatClientProgramAnswerWaitMode, nChatClientProgramAnswerReceived, and other related neurons is evaluated (see Reward distribution routine). A successful chatting experience would increase the desirability of the nChatClientProgram neuron and therefore the probability of "chat client program" use in the future. An unsuccessful experience would reduce the probability of such use.
2. Softcoded routines are evaluated and modified. A modified routine can be applied to process the results of the next incoming message.
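The flow above could look roughly like this in code. This is a simplified, hypothetical sketch: the neuron names come from the post, while the data structures and functions are assumptions made for illustration.

```python
# Hypothetical sketch of the chat-client internal tool cycle.

neurons = {
    "nChatClientProgram": 0.0,
    "nChatClientProgramAnswerWaitMode": 0.0,
    "nChatClientProgramAnswerReceived": 0.0,
}
short_memory = []

def chat_tool_send(active_concepts):
    # The tool converts active concepts to text and "sends" it.
    message = " ".join(active_concepts)
    neurons["nChatClientProgramAnswerWaitMode"] = 1.0
    return message

def chat_tool_receive(text):
    # Parse the incoming text and put the received concepts into short memory.
    short_memory.extend(text.split())
    neurons["nChatClientProgramAnswerReceived"] = 1.0

# AI decides to chat: activate the tool's neuron, then use the tool.
neurons["nChatClientProgram"] = 1.0
print(chat_tool_send(["hello", "world"]))

chat_tool_receive("hi there")
print(short_memory, neurons["nChatClientProgramAnswerReceived"])
```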
List of internal tools to develop for strong AI
1. Timer. It's good to have internal sense of time.
2. Google search - helps to understand new concepts.
3. Chat with operator.
4. Internet chat client.
External tools
Importance of external tools
External tools are important because:
1) There could be millions of external tools.
2) AI can use tools already developed by humans.
3) External tools can be converted into internal tools and gain all the advantages of internal tools.
External tools integration with AI
External tools are connected with the mind through internal tools.
Example:
Internal tool: a web browser.
External tool: a stock exchange web site.
Through the internal tool, AI can use the external tool.
Story of my interest in AI
Jiri> When did you first decide to attempt making Strong AI?
Jiri> Was there anything particular that triggered that decision?
I'd say it was around the year 2001.
It wasn't a sudden decision.
I was interested in AI among many other things.
Gradually I recognized how powerful such a tool could be.
Also, I decided that since computers are getting more and more powerful, AI should be implemented pretty soon.
Originally I didn't think that I should develop AI myself; I just thought that I'd be among the early adopters of AI, that I would just tweak it after someone (probably Microsoft) developed an AI framework.
Gradually I understood that I have to build AI myself, because:
1) Practically all other researchers are going in wrong directions.
2) I learned about approaches that should give successful results and put an approximate AI model together.
Monday, May 23, 2005
AI operator
What are the responsibilities of AI's operator?
The AI developer can define default values for parameters like:
- How quickly the AI system should forget new information.
- What weight increment should be applied to the relation between two concepts that were read near each other.
- ...
AI will be able to work with these default values, but in order to achieve optimal performance, the AI operator has to tweak them.
The operator will observe and analyze how AI performs, modify the default values, and look for improvements in AI's mental abilities.
The AI operator is not a person who talks with AI all the time.
The AI operator almost doesn't talk with AI.
The AI operator observes how AI's mental process works and "tunes/tweaks" AI's mind.
See also:
AI's operator
AI answering complex questions
> Imagine that the example talks about 2 accounts, initial amount $100
> on both and several simple financial transactions between the
> accounts. I believe your AI would get confused very soon and would not
> be able to figure out the balance.
In a situation of such complexity, regular human beings cannot provide an adequate answer.
What do you expect from an AI under development?
If we are talking about a perfect AI now, then again --- AI will not read text with a "one-time parsing" approach.
Instead, a perfect AI will read like a human: read a sentence, think, decide whether to read further, re-read, skip reading altogether, use another source of information (e.g. ask questions or go to Google), or do anything else. A perfect AI would carry out the chosen action until it was satisfied with the results.
But let's return to today's reality: we are talking about developing the first AI prototype, so we'd better skip overly complex tasks for now.
Friday, May 20, 2005
How to translate text from one language to another
Language translator prototype
0) Originally we have a sentence in a source language and we want to translate it into a destination language.
1) Take the "source language" sentence.
2) Find all text concepts (words and phrases) in the source sentence.
3) All these text concepts constitute the "source language text thought".
4) Search for all concepts which are related to the source language text thought.
5) As a result, we get a set of concepts which constitute the abstract thought.
6) Now it's time to search for the related text thought in the destination language.
7) So, we search for all concepts which simultaneously:
a) Relate to this abstract thought.
b) Relate to the concept which represents the destination language.
8) At this point we have all concepts related to the original text and to the destination language. This is the "destination language text thought".
9) Now we can easily convert this "destination language text thought" into "destination language text".
Strong AI can build the final sentence (by using a word dictionary, a phrase dictionary, and a text pairs dictionary).
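A condensed, hypothetical sketch of steps 1-9 (the toy dictionary stands in for the word/phrase dictionaries and the Concept/Relation tables; word-order handling and the text pairs dictionary are omitted):

```python
# Hypothetical sketch of the "text thought -> abstract thought -> text thought"
# translation pipeline.

# Word concepts tagged with their language, linked to abstract concepts.
text_concepts = {
    ("en", "cat"): "CAT", ("en", "sleeps"): "SLEEP",
    ("ru", "кошка"): "CAT", ("ru", "спит"): "SLEEP",
}

def translate(sentence, src, dst):
    # Steps 1-3: source language text thought.
    source_thought = [(src, w) for w in sentence.split() if (src, w) in text_concepts]
    # Steps 4-5: abstract thought (concepts related to the text thought).
    abstract_thought = [text_concepts[t] for t in source_thought]
    # Steps 6-8: destination text thought = concepts related both to the
    # abstract thought and to the destination language.
    destination_thought = [word for (lang, word), concept in text_concepts.items()
                           if lang == dst and concept in abstract_thought]
    # Step 9: build the destination text (word order handling is omitted).
    return " ".join(destination_thought)

print(translate("cat sleeps", "en", "ru"))
```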
See also:
Text synthesizer.
(Originally written: Sep 2004).
Friday, April 08, 2005
Mistakes and general intelligence
"People make stupid mistakes. A well designed AI should not."
Jiri Jelinek
Human beings make mistakes because their minds make approximate decisions.
Human beings have general intelligence because their minds are able to make approximate decisions.
If you develop AI without this critical feature (approximate decision making), then such an AI won't have general intelligence...
Flawless AI
In order to make decisions without mistakes you need 3 things:
1) An appropriate "perfect problem solver" algorithm.
2) Full information about our world.
3) Endless computational power.
Even if #1 is theoretically possible, #2 and #3 are impossible even in theory.
1) Appropriate "perfect problem solver" algorithm.
2) Full information about our word.
3) Endless computational power.
Even if #1 is theoretically possible, #2 and #3 are impossible even in theory.
Thursday, April 07, 2005
Abstract concept
An abstract concept is a concept that is not directly connected to the system's receptors.
An abstract concept is connected with other concepts, though. An abstract concept is connected to receptors indirectly, through non-abstract concepts (surface concepts).
It's not an easy task to identify and create an abstract concept. You cannot just borrow it from the external world as you can with surface concepts.
What do you think: is it a good idea to name such an abstract concept a "Deep Concept"?
It may help to distinguish abstract concepts which are available in books from abstract concepts which must be created by AI itself.
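A tiny sketch of the distinction (all names are invented): a surface concept is tied to a receptor or a dictionary entry, while an abstract (deep) concept only reaches reality through surface concepts.

```python
# Hypothetical sketch: surface concepts connect to receptors/dictionaries,
# an abstract ("deep") concept connects only to other concepts.

surface_concepts = {"cat": "word dictionary", "tiger": "word dictionary",
                    "warmth": "temperature sensor"}
abstract_concepts = {"animal": ["cat", "tiger"]}   # no direct receptor link

def is_abstract(name):
    return name in abstract_concepts and name not in surface_concepts

print(is_abstract("animal"), is_abstract("cat"))   # True False
```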
Thursday, March 17, 2005
Limited AI, weak AI, strong AI
Jiri,
> your AI reminds me of an old Czech fairy-tale where a dog and cat
> wanted to bake a really tasty cake ;-9, so they mixed all kinds of
> food they liked
> to eat and baked it.. Of course the result wasn't quite what they expected >;-).
That's not the case.
:-)
I know a lot of stuff, and I carefully selected features for strong AI.
I rejected far more features than I included.
And I did that because I thought these rejected features are useless in true AI, even though they are useful for weak AI.
> I think you should start to play with something a bit less challenging
> what would help you to see the problem with your AI.
Totally agree.
As I said --- I'm working on limited AI, which is simultaneously:
1) Weak AI.
2) A few steps toward strong AI.
There are many weak AI applications. Some weak AIs are steps toward strong AI; most weak AIs contribute almost nothing to strong AI.
That's why I need to choose limited AI functionality carefully.
Your suggestion below may become a good example of such limited AI, with a proper system structure.
But I probably wouldn't work on it in the nearest future, because it doesn't have much business sense.
======= Jiri's idea =======
How about developing a story generator. User would say something like:
I want an n-pages long story about [a topic], genre [a genre].
Then you could use google etc (to save some coding) and try to
generate a story by connecting some often connected strings.
Users could provide the first sentence or two as an initial story trigger.
I do not think you would generate a regular 5 page story when using
just your statistical approach. I think it would be pretty odd
mix of strings with pointless storyline = something far from the
quality of an average man-made story.
===========================
Sunday, March 13, 2005
Lojban vs programming languages vs natural language
Ben, this idea is wrong:
-----
Lojban is far more similar to natural languages in both intent, semantics and syntax than to any of the programming languages.
-----
Actually, Lojban is closer to programming languages than to natural languages.
The structure of Lojban and of programming languages is predefined.
The structure of natural languages is not predefined. The structure of a natural language is defined by examples of the use of that natural language. This is the key difference between Lojban and natural language.
Since the structure of a natural language is not predefined, you cannot put the language structure into NL parser code. Instead you need to implement a system that will learn the rules of a natural language from a massive amount of examples in that natural language.
You are trying to code natural language rules into your text parser, aren't you?
That's why you can, in theory, parse Lojban and programming languages, but you cannot properly parse any natural language, even in theory.
If you want to properly parse natural language, you need to predefine as few rules as possible.
I think that a natural language parser has to be able to recognize words and phrases.
That's all that an NL text parser has to be able to do.
All other mechanisms of natural language understanding should be implemented outside the text parser itself.
These mechanisms are:
- A word dictionary and a phrase dictionary (to serve as a link between natural language (words, phrases) and internal memory (concepts)).
- Relations between concepts, and mechanisms which keep these relations up to date.
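A minimal sketch of such a parser (toy dictionaries; longest-phrase-first matching is one possible choice, not something prescribed by the post):

```python
# Hypothetical sketch: the NL parser only recognizes words and phrases
# and maps them to concept IDs; all further understanding happens elsewhere.

word_dictionary = {"new": 1, "york": 2, "is": 3, "big": 4}
phrase_dictionary = {("new", "york"): 10}

def parse(text):
    tokens = text.lower().split()
    concepts, i = [], 0
    while i < len(tokens):
        pair = tuple(tokens[i:i + 2])
        if pair in phrase_dictionary:          # prefer the longer (phrase) match
            concepts.append(phrase_dictionary[pair])
            i += 2
        elif tokens[i] in word_dictionary:
            concepts.append(word_dictionary[tokens[i]])
            i += 1
        else:
            i += 1                             # unknown token: skip for now
    return concepts

print(parse("New York is big"))   # [10, 3, 4]
```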
Lojban
Ben,
I think that it's a mistake to teach AI any language other than a natural language.
Lojban is not a natural language for sure (because it wasn't really tested for a variety of real-life communication purposes).
The reasons why strong AI has to be taught a natural language, not Lojban:
1) If AI understands natural language (NL), then it's a good sign that the core AI design is correct and quite close to optimal.
If AI cannot learn NL, then it's a sign that the core AI design is wrong.
If AI can learn Lojban --- it proves nothing from a strong AI standpoint.
There are a lot of VB, Pascal, C#, and C++ compilers already. So what?
2) NL understanding has immediate practical sense.
Understanding of Lojban has no practical sense.
3) The NL text base is huge.
The Lojban text base is tiny.
4) Society is "the must" component of intelligence.
A huge number of people speak/write/read NL.
Almost nobody speaks Lojban.
Bottom line:
If you spend time/money on designing/teaching AI to understand Lojban, it would be just a waste of your resources. It has neither strategic nor tactical use.
Friday, March 11, 2005
Logic
Jiri, you misunderstand what logic is about.
Logic is not something 100% correct. Logic is a process of building a conclusion based on highly probable information (facts and relations between these facts).
By "highly probable" I mean over 90% probability.
Since logic does not operate on 100% correct information, logic generates both correct and incorrect answers. In order to find out whether a logical conclusion is correct, we need to test it. That's why an experiment is necessary before we can rely on a logical conclusion.
Let's consider an example of logic process:
A) Mary goes to the church.
B) People who go to church believe in God.
C) Mary believes in God
D) People who believe in God believe in life after death.
E) Mary believes in life after death.
Let's try to understand how reliable this logical conclusion could be.
Let's assume that every step has 95% probability.
Then the total probability would be 0.95 * 0.95 * 0.95 * 0.95 * 0.95 ≈ 0.77 = 77% (see the sketch after the list below).
Actually:
1) We may have wrong knowledge that Mary goes to the church (we could confuse Mary with someone else, or Mary might stop going to the church).
2) Not all people who go to church believe in God.
3) We could make a logical mistake assuming that (A & B) result in C.
4) Not all people who believe in God believe in life after death.
5) We could make a logical mistake assuming that (C & D) result in E.
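Generalizing the arithmetic above: with per-step reliability p, an n-step chain is only p^n reliable, which drops off quickly. A small sketch (my illustration, not part of the original argument):

```python
# Reliability of a logical chain falls exponentially with the number of steps.

def chain_reliability(p_per_step, steps):
    return p_per_step ** steps

for steps in (1, 5, 10, 20):
    print(steps, round(chain_reliability(0.95, steps), 2))
# 1 0.95
# 5 0.77
# 10 0.6
# 20 0.36
```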
Conclusion #1:
Since logic is not reliable, long logical conclusions could typically be less probable than even unreliable observations.
For instance, if Mary's husband and Mary's mother mentioned that Mary doesn't believe in life after death, then we'd better rely on their words more than on our 5-step logical conclusion.
Conclusion #2:
Since multi-step logic is unreliable, multi-step logic is not "the must" component of intelligence. Therefore logic implementation could be skipped in the first strong AI prototypes.
Limited AI can function very well without multi-step logic.
Friday, March 04, 2005
Background knowledge --- how much data do we need?
Jiry> And try to understand that when testing AI (by letting it to solve
Jiry> particular problem(s)), you do not need the huge amount of data you
Jiry> keep talking about. Let's say the relevant stuff takes 10 KB (and it
Jiry> can take MUCH less in many cases). You can provide 100 KB of data
Jiry> (including the relevant stuff) and you can perform lots of testing.
Jiry> The solution may be even included in the question (like "What's the
Jiry> speed of a car which is moving 50 miles per hour?"). There is
Jiry> absolutely no excuse for a strong AI to miss the right answer in those
Jiry> cases.
Do you mean that 100 KB of background knowledge is enough for strong AI?
Are you kidding?
By the age of 1 year, a human baby has parsed at least terabytes of information, and keeps at least many megabytes of it in memory.
Do you think a 1-year-old baby has strong AI with all this knowledge?
Yes, artificial intelligence could have an advantage over natural intelligence: AI could be intelligent with less information.
But not with 100 KB.
100 KB is almost nothing for General Intelligence.
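Here is a rough back-of-envelope sketch of why "terabytes" is a reasonable order of magnitude (every number below is my loose assumption, not a measurement):
==========
# Very rough estimate of sensory input parsed during the first year of life.
bytes_per_second = 1_000_000        # assumption: ~1 MB/s of effective sensory input
waking_seconds_per_day = 12 * 3600  # assumption: ~12 waking hours per day
days = 365

total_bytes = bytes_per_second * waking_seconds_per_day * days
print(total_bytes / 1e12, "TB")     # ~15.8 TB --- terabytes, even with modest assumptions
==========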
From Limited AI to Strong AI
Jiri> OK, you have a bunch of pages which appear to be relevant.
Jiri> What's the next step towards your strong AI?
Next steps would be:
1) Implementation of hardcoded goals
2) Implementation of experiment feature.
3) Natural Text writing.
4) ...
How many types of relations should strong AI support?
Dennis>> why 4 types of relations are better than one type of relations?
Jiri> Because it better supports human-like thinking. Our mind is working
Jiri> with multiple types of relations on the level where reasoning applies.
Our mind works with far more than 4 types of relations.
That's why it's not a good idea to implement 4 types of relations: on the one hand it's too complex, and on the other hand it's still not enough.
A better approach would be to use one relation type which is able to represent all the other types.
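Here is a minimal sketch of what such a single relation type could look like (my own illustration, not a finished schema):
==========
from dataclasses import dataclass

@dataclass
class Relation:
    cause: str      # source concept
    effect: str     # target concept
    weight: float   # strength of the relation, updated by learning/forgetting

# One generic weighted relation can stand in for many specific relation types:
memory = [
    Relation("dog", "animal", 0.9),       # plays the role of a parent-child relation
    Relation("rain", "wet street", 0.8),  # plays the role of a cause-effect relation
]
==========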
Thursday, March 03, 2005
Learning common sense from simple Natural Text parsing
Jiri,
>> 1) Could you please give me an example of two words which are used near
>> each other, but do not have cause-effect relations?
> I'll give you 6. I'm in a metro train right now and there is a big
> message right in front of me, saying: "PLEASE DO NOT LEAN ON DOORS"
> What cause(s) and effect(s) do you see within that statement?
Let's imagine that a strong AI is in the middle of a reasoning process.
In order to do general reasoning, the AI needs background knowledge (common sense). That's what CyCorp is trying to achieve.
Now let's consider what kind of background knowledge can be extracted from the statement "PLEASE DO NOT LEAN ON DOORS".
(Obviously this knowledge extraction should happen outside of actual decision-making time, because a huge amount of text has to be parsed and our test statement is just one of many millions of statements.)
OK, here is what we learn from the test statement:
- If you think about "lean" - think about "doors" as one of the options.
- If you think about "doors" - think about "lean" as one of the options.
- If you say "do not" - think about saying "please" too.
- If you say "do" - think about saying "please" too.
- "Doors" is a possible cause for "not lean".
- "Doors" is a possible cause for "lean".
- You "lean" "on" something.
- If you think about "on" - think about "doors" as one of the options.
You can extract even more useful information from this sentence.
Even "Please" -> "Doors" and "Doors" -> "Please" make some sense. Not much though. :-)
A statistical approach would help to find which relations are more important than others.
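Here is a minimal sketch of this kind of statistical relation extraction (the window size and the simple counting are my assumptions for illustration):
==========
from collections import Counter

pair_counts = Counter()  # (concept, concept) -> how often they appear near each other

def learn_from_sentence(sentence, window=3):
    words = sentence.lower().split()
    for i, w in enumerate(words):
        for other in words[i + 1:i + 1 + window]:
            pair_counts[(w, other)] += 1  # nearby words get a co-occurrence vote
            pair_counts[(other, w)] += 1

learn_from_sentence("please do not lean on doors")
# After parsing millions of sentences, frequent pairs like ("lean", "doors")
# get much higher counts than accidental pairs like ("please", "doors").
print(pair_counts[("lean", "doors")])
==========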
Do you see my point now?
When it's time to make an actual decision, the AI would already have a common-sense database which provides a large, but not endless, set of choices to consider.
All these choices would be pre-rated. That helps to prioritize their consideration.
Now let's consider whether the structure of the main memory should be adjusted in order to transform "Limited AI" into "Strong AI".
I don't see any reason to change the memory structure in order to make such a transition.
Additional mechanisms for updating cause-effect relations would be introduced, such as experiment, advanced reading, and "thought experiment". But all these new mechanisms would still use the same main memory.
Tuesday, March 01, 2005
Simple AI as a necessary prototype for complex AI
Jiri,
1) Goals defined by an operator are even more dangerous.
2) You can load data from CYC, but this data wouldn't become knowledge. Therefore it wouldn't be learning, and it wouldn't be useful.
Goals are still necessary for learning. Only goals give sense to learning.
3) Why would a long question cause a "no answer found" result? Quite the contrary --- the longer the question, the more links to possible answers can be found.
4)
>> Bottom line: "Generalization is not core AI feature".
> It's not a must for AI, but it's a pretty important feature.
> It's a must for Strong AI. AI is very limited without that.
- I have ideas about how to implement the generalization feature.
Would you like to discuss these ideas?
- I think it's not a good idea to implement generalization in the first AI prototype.
Do you think generalization should be implemented in the first AI prototype?
5)
> "Ability to logically explain the logic" is just useful for invalid-idea
> debugging.
> So I recommend to (plan to) support the feature.
All features are useful. The problem is that when we put too many features into a software project, it just dies.
That's why it's important to prioritize the features correctly.
Do you think that logic should be implemented in the first AI prototype?
50 years of trying to put logic into first AI prototypes proved that it's not a very good idea.
6) Reasoning tracking
> It's much easier to track "reasons for all the (sub)decisions"
> for OO-based AI.
No, it's not easier to track reasoning in an AI than in a natural intelligent system.
Evolution could have coded such an ability, but it didn't implement 100% tracking of reasoning.
There are essential reasons for avoiding 100% reasoning tracking.
Such tracking simply makes an intelligent system more complex, slower, and therefore very awkward.
And an intelligent system is a very fragile system even without such a "tracking improvement".
Bottom line: the first AI prototype doesn't need to track the process of its own reasoning. Only reasoning outcomes should be tracked.
7) AIML
> Your AI works more-less in the AIML manner. It might be fun to play
> with, but it's a dead end for serious AI research.
> AIML = "Artificial Intelligence Markup Language", used by Alice and
> other famous bots.
Does AIML have the ability to relate every concept to every other concept?
Do these relations have weights?
Does one word correspond to one concept?
Is the learning process automated in Alice?
Is the forgetting feature implemented in Alice?
8)
>>If I need 1 digit precision, then my AI needs just to remember few hundred
>>combinations
> searching for stored instances instead of doing real
> calculation is a tremendous inefficiency for a PC based AI.
Calculation is faster than search --- but only if you already know that a calculation is necessary. How would you know that a calculation is necessary while you parse text?
The only way is to look at what you already have in your memory. And there you can often just find the answer.
But yes, sometimes the required calculations are not that easy. In this case the best approach would be to extract approximate results from the main memory and make precise calculations through math functions.
And again, this math-function integration is not a top-priority feature. Such a feature is necessary for technical tasks, not for basic activity.
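Here is a minimal sketch of "remember first, calculate only when needed" (the lookup table and the fallback are just my illustration of the idea):
==========
remembered_answers = {"2 + 2": "4", "9 * 9": "81"}  # learned from reading

def answer(expression):
    if expression in remembered_answers:   # fast path: just recall the answer
        return remembered_answers[expression]
    return str(eval(expression))           # fallback: a precise math function

print(answer("2 + 2"))    # recalled from memory
print(answer("17 * 23"))  # calculated, because it was never memorized
==========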
>> Intelligence is possible without ability to count.
> Right, but the ability is IMO essential for a good problem solver.
Correct, but you/me/whoever cannot build a good problem solver in the first AI prototype anyway.
9) Design is limited, but not dumb
> Don't waste time with a dumb_AI design.
The design is not dumb, it's limited. And it can be extended in the second AI prototype. Feel the difference.
10) Real life questions
> If I say obj1
> is above obj2 and then ask if the obj2 is under the obj1 then I expect
> the "Yes" answer based on the scenario model the AI generated in its
> imagination. Not some statistical junk.
This is not a real-life question for an AI.
Far more probable questions are: "here is my resume, please give me matching openings" or "I'm looking for a cell phone with X, Y, Z features, my friends have P and Q plans, what would you recommend?".
Limited AI can be used for answering these questions.
11) The first AI prototype's target on intelligent jobs market
> AI's ability to produce unique and meaningful thoughts. To me, that's
> where the AI gets interesting and I think it should be addressed in
> the early design stages if you want to design a decent AI..
Humans do all kinds of intelligent jobs. Some of them are primitive (like first-level tech support), some of them are pretty complex (scientist / software architect / entrepreneur / ...).
It's natural for the first AI prototype to try to replace humans in primitive intelligent jobs first. Do you agree?
It's practically impossible to build a first AI prototype which will replace humans in the most advanced intelligent jobs. Agree?
12) "brain design" vs "math calculator"
> don't you see that it's a trully desperate attempt to use
> our brain for something it has an inappropriate design for? The human
> brain is a very poor math-calculator. Let me remind you that your AI
> is being designed to run on a very different platform..
Let me remind you that the human brain is a far better problem solver than any advanced math package.
A modern math package is not able to solve any problem without a human's help.
A human can solve most problems without a math package.
Think again: what exactly is missing in modern software?
Then make your own conclusion about what the core AI features are.
The platform is irrelevant here.
So what if you can relatively easily add a calculator feature to the AI? The calculator feature is not critical to intelligence at all. Therefore it would just make the first AI prototype more awkward and more time-consuming to develop.
Do you want that?
13) Applicability of math skills to real-life problems
>>> For example, my AI can learn the Pythagoras Theorem: a^2 + b^2 = c^2.
>> How would you reuse this math ability in decision making process like:
>> "finding electrical power provider in my neighborhood"?
> I do not think it would be useful for that purpose (even though a
> powerful AI could make a different conclusion in a particular
> scenario). The point is that general algorithms are useful in many
> cases where particular instance of the algorithm based solution is not
> useful at all.
Do you mean that you have some general algorithm which can solve both the "Pythagoras Theorem" question and the "finding an electrical power provider in my neighborhood" question?
What is this general algorithm about?
14) Advanced Search
> I do not know how exactly google sorts the results but it seems to
> have some useful algorithms for updating the weights. Are you sure
> your results would be very different?
Yes, they would be different:
1) Google excludes results which don't have an exact match.
2) Google doesn't work with long requests.
3) Google has limited ability to understand natural language.
4) Google doesn't follow an interactive discussion with the user.
I have some ideas about how to improve the final search results. But the first step would still be a search on Google :-)
Because of performance and information-gathering issues.
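Here is a minimal sketch of re-ranking external search results with locally learned weights (the data, the URLs, and the update rule are illustrative assumptions):
==========
link_weights = {}  # answer URL -> learned desirability, updated from feedback

def rerank(results):
    # results: list of (url, engine_score) pairs coming from an external engine
    return sorted(results,
                  key=lambda r: r[1] * link_weights.get(r[0], 1.0),
                  reverse=True)

def feedback(url, good):
    # user/expert feedback nudges the weight up or down
    link_weights[url] = link_weights.get(url, 1.0) * (1.2 if good else 0.8)

feedback("http://example.com/old-answer", good=False)
print(rerank([("http://example.com/old-answer", 10.0),
              ("http://example.com/new-answer", 9.0)]))  # new answer now wins
==========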
> Since you work on a dumb AI which IMO
> does not have a good potential to become strong AI, the related
> discussion is a low priority to me.
Again, it's not dumb. It's limited, because it's just the first prototype.
Do you prefer the waterfall development process or Rapid Application Development (RAD)?
What about your preferences in research and development?
Friday, February 25, 2005
Finding relevant answer in the question context
For some reason Jiri thinks that providing probable answers ordered by relevance wouldn't work well:
> 1) You will display "Top N" answers (in order to not overwhelm user)
> but the right answer might be in N+ because the quantity based "order
> by" will be invalid. Things are changing. An old info (which is
> incorrect today) can easily have more instances in the collected data.
That's why relations are constantly being updated.
If a wrong answer pops up, it will be applied; this will cause problems; then the relations to this answer will be updated to make it less desirable.
> People deal with unique scenarios all the time.
Scenarios may be unique, but the components of scenarios are not unique at all.
The AI would divide scenarios into concepts (words, phrases, and optionally abstract concepts). Then experience regarding all these concepts would be summarized --- relevant concepts would be activated.
> I really do not think we need an AI searching for "average" answers in
> what we wrote. That's just useless.
You are wrong.
Google makes a huge profit in the business of answering simple and average questions.
> 3) If I'm gonna ask your AI something about Mr. Smith, how does it
> know what Smith I'm talking about. How could I clarify that when
> talking with your AI?
From the context of your question. You would probably put some info about Mr. Smith in it, right?
All these words, phrases, and optionally abstract concepts would be used for the answer search.
> Let's say it's clarified in question #1 and I got an answer, but now,
> I want to ask one more question about Mr. Smith. I have to clarify who
> he is again (assuming it's possible), right?
Short memory would help in this situation.
The AI parses your question into concepts. These concepts are stored in short memory. Gradually all of them would be pushed out of short memory by new concepts, but this "pushing out" process doesn't happen instantly --- for some time the original concepts (related to Mr. Smith) would be preserved in short memory. The concepts most relevant to the Mr. Smith topic would stay in short memory even longer.
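Here is a minimal sketch of such a short memory with gradual "pushing out" (the decay rate and threshold are my assumptions for illustration):
==========
short_memory = {}  # concept -> activation level

def add_concepts(concepts):
    for c in concepts:
        short_memory[c] = short_memory.get(c, 0.0) + 1.0  # new or repeated concepts get boosted

def decay(rate=0.7, threshold=0.1):
    for c in list(short_memory):
        short_memory[c] *= rate   # everything fades a little with each new exchange
        if short_memory[c] < threshold:
            del short_memory[c]   # weak concepts are pushed out completely

add_concepts(["Mr. Smith", "accountant", "Chicago"])  # question #1
decay()
add_concepts(["tax", "return"])  # question #2: "Mr. Smith" is still active in short memory
print(short_memory)
==========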
> Questions and relevant answers are often not together and when they
> are then there is often some "blah blah blah" between, causing your AI
> to display the useless "blah blah blah" instead of the answer.
Why do you assume that my AI would search only web pages in a Question/Answer format?
Any text would work.
Here are two possible implementations of answer search:
1) "Limited AI" implementation of answer search
Web pages with answers related to user's question could be found by concept match between "question concept list" and "answer concept lists".
2) Strong AI implementation of answer search
Question concept list would generate sequence of softcoded routines (read: flexible routines configured by AI itself), which will do whatever is necessary to find the answer. Possible routines could include search on Google, reading, chatting, emailing, and combination of all this stuff with various parameters, etc...
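Here is a minimal sketch of the "Limited AI" variant --- scoring pages by concept overlap with the question (the page data and the simple scoring are illustrative assumptions):
==========
def score(question_concepts, page_concepts):
    return len(set(question_concepts) & set(page_concepts))  # simple concept match

pages = {
    "page_about_cell_plans": ["cell", "phone", "plan", "price"],
    "page_about_gardening":  ["garden", "plant", "soil"],
}
question = ["cell", "phone", "plan", "recommend"]

best = max(pages, key=lambda p: score(question, pages[p]))
print(best)  # page_about_cell_plans
==========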
AI output --- response in Natural Language
Jiri> how exactly you want to generate the response sentences?
There are two approaches to generate the answer:
1) Simple approach (for limited AI)
Just copy:
- content of the most relevant page
- reference to this page
(like Google does).
2) Writing text (for strong AI)
When the answer is prepared in short memory (in the form of an answer concept list), it should be converted into Natural Language text.
The AI already has relations between words and concepts, so it can prepare NL text. The text wouldn't be nice to read, but it would already be in natural language.
In order to make the text output better, the AI has to remember the typical flow of natural language. Such information could be stored in a TextPair table.
Information is gathered into the TextPair table during massive reading.
Basically the TextPair table would hold statistical information about typical language constructions.
See also: Writer Prototype
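Here is a minimal sketch of a TextPair-style table --- counting which word typically follows which during massive reading (the layout is my illustration, not the actual schema):
==========
from collections import defaultdict

text_pairs = defaultdict(int)  # (word, next_word) -> frequency

def read(text):
    words = text.lower().split()
    for a, b in zip(words, words[1:]):
        text_pairs[(a, b)] += 1

read("the answer is in the text")
read("the answer is simple")

def most_likely_next(word):
    candidates = {b: n for (a, b), n in text_pairs.items() if a == word}
    return max(candidates, key=candidates.get) if candidates else None

print(most_likely_next("answer"))  # "is" --- the typical flow of the language
==========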
Other things which could improve writing:
1) Phrase concepts could be converted into text too.
2) Output sentences should be kept short. Translating one abstract concept into one sentence would be a good idea.
3) While looking through the Pair table, search for synonyms as substitutions for the original concepts.
4) The best feature, but the hardest to implement:
Use softcoded routines to generate the text --- for every concept, find a softcoded routine which relates both to this concept and to the "writing text" module.
These softcoded routines would output the actual text.
Obviously these softcoded routines should be prepared prior to text generation. That could be done by two standard strong AI learning techniques: "knowledge download" and "experiment".
For example, during an experiment, successful softcoded routines would be adopted/reinforced. Inefficient softcoded routines would be erased.
> If it involves connecting parts of sentences from various regions of
> data based on statistics then it will often generate garbage.
You are wrong.
Even the pretty dumb Eliza text generation algorithm works acceptably.
Why would a more efficient algorithm work worse?
Thursday, February 24, 2005
Strong AI: finding cause and effect
Jiri,
You claim that my strong AI design wouldn't be able to handle cause-effect relations. But the whole memory structure was designed exactly for the purpose of finding these cause-effect relations.
Some history
Originally I put two types of relations into the main memory design:
1) Cause-effect relations.
2) Parent-child relations.
But later I decided that the system would be simpler, and still work efficiently, if I kept only one type of relation between concepts: cause-effect relations.
Back to current design
The strong AI design assumes that the main memory would keep millions of concepts connected by hundreds of millions of cause-effect relations.
With such a memory it would be easy to find the cause(s) for any specified effect(s).
It would also be easy to find the effect(s) for any specified cause(s).
Your next question probably would be: "how can we put all these millions of cause-effect relations into the main memory?".
The one-word answer is: "Learning".
The short answer is: "Read the experiment and/or event correlation analyzer articles".
If you don't have time to read "Learning", "Experiment", and "Event Correlation Analyzer", read at least this simplified example:
-----
AI sends message: "Hi, dude".
AI receives message: "Hello".
Event correlation analyzer adds cause-effect relations between concepts "Hi, dude" and "Hello".
-----
You can find the full version of this example on the experiment page.
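Here is a minimal sketch of that event correlation step (the time window and the weight increment are my assumptions):
==========
relations = {}  # (cause concept, effect concept) -> weight

def correlate(events, window_seconds=5):
    # events: list of (timestamp, concept) pairs, ordered by time
    for i, (t1, c1) in enumerate(events):
        for t2, c2 in events[i + 1:]:
            if t2 - t1 > window_seconds:
                break
            relations[(c1, c2)] = relations.get((c1, c2), 0.0) + 1.0

correlate([(0.0, 'sent: "Hi, dude"'),
           (1.5, 'received: "Hello"')])
print(relations)  # {('sent: "Hi, dude"', 'received: "Hello"'): 1.0}
==========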
Emotions in Strong AI
> Are emotions part of the "main functionality"?
Yes.
Emotions are part of the core AI functionality.
But in order to correctly understand my answer, you need to understand what I mean by emotions.
An emotion is a kind of advanced reflex. Typically an emotion consists of a group of reflexes working together. There can be many reflexes in a single emotion. That's why it's hard to predict an emotion even if you know the behavior of every reflex. The problem of predicting an emotional response is actually worse, because usually the observer doesn't know which reflexes affect the emotional result.
On the other hand, it is not that hard to calculate the result of an emotion inside the AI system.
It just takes a bunch of straightforward calculations.
These calculations are really simple.
Example:
Let's assume that reflex1 (a softcoded routine) activates concept e1 if concept c1 is activated.
...
Let's assume that reflexN activates concept eN if concept c1 is activated.
("c" stands for "cause" and "e" stands for "effect".)
The whole emotion would activate concepts e1 ... eN.
These concepts e1 ... eN represent the emotional response of the AI.
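Here is a minimal sketch of that calculation (the reflex list is purely illustrative):
==========
reflexes = [          # each reflex: if the cause concept is active, activate the effect concept
    ("c1", "e1"),
    ("c1", "e2"),
    ("c1", "eN"),
]

def emotional_response(activated_concepts):
    effects = set()
    for cause, effect in reflexes:
        if cause in activated_concepts:  # each reflex fires independently
            effects.add(effect)
    return effects                       # e1 ... eN = the emotional response

print(emotional_response({"c1"}))  # {'e1', 'e2', 'eN'}
==========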
Wednesday, February 23, 2005
Simple AI as a necessary prototype for complex AI
Jiri,
> I would not have any problem with (AI's) hardcoded goals if they are
> guaranteed to stay fully compatible with our goals.
Nobody can give such a guarantee.
For instance, the desire to protect their families was a component of the motivation of the suicide pilots who crashed into the Twin Towers in NYC in September 2001.
>Bottom line:
>In order to make AI to achieve such "high level goals",
>operator/admin has to carefully design set of "simple goals".
> Having enough data, AI can generate all the needed sub-goals and solutions.
Nope.
Without assistance (in the form of goals) it's practically impossible to learn.
Without learning it's practically impossible to achieve high-level goals.
Without the sexual instinct, reproduction is practically impossible.
> Optionally, admin can specify rules which cannot be broken during the
> problem solving process.
The problem-solving process is too delicate to hand over to an admin.
The solving process should be implemented by a developer under strict architect supervision.
>>It is impossible to educate without "desire to learn"
>> (read: "learning hardcoded goals") already implemented in AI.
> Not sure if I understand correctly. Assuming I do, I would say it
> applies to people, not to AI.
It applies to any intelligent learning system.
> The AI needs to be able to generate customized models for particular
> problem scenarios. The same question can be asked under different
> scenarios and the correct answers might be different or even contrary.
A different scenario means that this different scenario will be mentioned in the question.
If a different scenario is mentioned in the question, simple AI would generate a different answer.
> That's one of the reasons why your AI cannot work well. Another one is
> that it cannot generalize.
Generalization is a different feature. It could be implemented later.
BTW, most humans don't generalize well.
They can borrow generalizations from other people, but typically don't create their own generalizations.
Simple AI will be able to borrow generalizations from NL text.
Bottom line: "Generalization ability is not a core AI feature".
>>BTW, these "logically advanced" humans are not necessarily the most
>>successful ones :-)
>Right.. Success takes some luck..
This is not about luck.
Strong communication skills and an efficient set of goals are far more important for intelligence than advanced logical skills.
> The most basic demo might be doable in a few days.
Nope :-(
> The parser which does the inserts should be relatively easy to do.
Correct, I have successfully implemented it.
But this is not a full demo. Therefore there is nothing to show or experiment with.
> Put the sentence-parts into a single table as you have originally planned.
> Let it learn from locally stored text files...
This learning part takes a longer time to develop.
>>If I need 1 digit precision, then my AI need just remember few hundred
>>combinations
> There is an infinite number of combinations.
With 10 digits???
> It's terribly limited if it cannot do calculation it did not observed.
Intelligence is possible without the ability to count.
History proves it.
>>Also my AI would use special math functions for calculations :-)
> Good, you are getting there ;-)..
Well, NL text has to be processed first. After that, the need for calculations should be identified. Then parameters should be prepared and passed to the math functions.
For me it's obvious that AI can work without math, but cannot work without NL processing.
>>> The system needs to understand symbols "2", "+", "4", "=" separately.
>>Yes, but in a limited way.
>>Concept "2" may have relations with "1 + 1", "0 + 2", and "1 * 2".
>>"=" may be associated with internal math calculator. And with "2*2 = 4".
>>Etc.
> Crazy ;-)..
Sorry, but that's how our minds work.
> Here you go. Do not waste time with lots of coding. Google is your AI.
> The problem is that you would need a lot more magic than some synonyms
> from webster to turn it into a clever AI.
I cannot update the weights of Google's links.
That's why I cannot just play with Google.
Limited AI
As my first AI prototype, I'm going to implement an AI with a limited set of features.
Such a "limited AI" (or "simple AI") project should be relatively easy to implement.
The "Limited AI" project should make business sense on its own.
Features from this "Limited AI" should be useful for the "Full AI" ("Complex AI" / "Strong AI").
Here are these "Limited AI" features:
1) Memory in the form of a Neural Net:
A graph with concepts as nodes and relations as edges.
2) Natural Language processing.
Natural language is converted into Concepts. Appropriate relations are created.
3) Learning from Feedback.
Based on feedback from users/experts, relations between Concepts are updated.
The Feedback User Interface should be implemented in an easy-to-use form.
"Learning from Feedback" requires implementation of a simple prototype of the Motivation System.
"Learning from Feedback" has limited learning ability.
4) Forgetting.
Relations get weaker with time (unless learning happens). A sketch of this mechanism follows after this list.
Very weak relations are deleted from the system completely.
The same forgetting mechanism can be applied to concepts.
What is not included in "Limited AI":
1) A set of hardcoded goals (the full "Motivation System").
2) "Self-programming" (Programmator, softcoded routines).
Such "limited AI" (or "simple AI") project should be relatively easy to implement.
"Limited AI" project should have its own business sense.
Features from this "Limited AI" should be useful for "Full AI" ("Complex AI" / "Strong AI").
Here are these "Limited AI" features:
1) Memory in form of Neural Net:
Graph with concepts as a nodes and relations as an edges.
2) Natural Language processing.
Natural language is converted into Concepts. Appropriate relations are created.
3) Learning from Feedback.
Based on feedback from users/experts relations between Concepts are updated.
Feedback User Interface should be implemented in easy-to-use form.
"Learning from Feedback" requires implementation of a simple prototype of Motivation System
"Learning from Feedback" has limited learning ability.
4) Forgetting.
Relations are getting weaker with time (unless learning happens).
Very weak relations are deleted completely from the system.
Same forgetting mechanism can be applied to concepts.
What is not included into "limited AI":
1) Set of hardcoded goals (full "Motivation System").
2) "Self-programming" (Programmator, softcoded routines)
Tuesday, February 22, 2005
Simple AI as a necessary prototype for complex AI
Jiri,
> Call it goal or "attraction point", I would not recommend to hardcode it.
Sometimes it's easier to hardcode a goal than to implement a "goal designer" for the administrator.
> There should be some sort of Add/Edit/Delete mode for that (possibly
> for Admin(s) only).
You can implement one of the hardcoded goals in the form of "obey the administrator".
This is still an option.
> But I think you should be able to describe the source scenario, the
> target scenario and optionally some rules which cannot be broken. Then
> The AI should generate solution steps (assuming it has relevant
> resources).
- Babies don't understand a "source scenario", but they can still learn. Why can't AI be the same?
- I think you're still missing the point of what hardcoded goals are.
A goal is not the final point in (self)development.
Humans have hardcoded goals, but they don't have hardcoded goals like "become the president of the US" or "earn $1 billion". These two examples are softcoded goals.
Bottom line:
In order to make the AI achieve such "high-level goals", the operator/admin has to carefully design a set of "simple goals".
Most probably the "simple goals" and the "high-level goals" would be different.
>> Goals provide direction of learning/self development.
> no need for "hardcode". Editable = better.
It is impossible to educate without a "desire to learn" (read: "learning hardcoded goals") already implemented in the AI.
> you need to implement imagination in order to develop
> decent AI. I mean the AI needs to be able to generate some sort
> of model of the scene it thinks about. Not necessarily a 3D simulation
> but some type of model it could play with in it's mind.
A model - yes.
A visual model - not necessarily (hint: blind people are still intelligent).
Actually, the whole memory structure is designed for building models (concepts and relations between concepts).
>> Hardcoded goals should evaluate feedback and make conclusions.
>> Not necessarily logical conclusions. More like emotional conclusions.
> I think everything should be logical and the AI should be able to explain
> the logic whenever requested by an authorized user..
It's nice to have such an ability, but... it's not necessary.
The ability to logically explain one's logic comes with experience, education, a lot of thinking, conversations, and time.
Children mostly cannot logically explain why they behave in a certain way. But they still learn.
Adults have a limited ability to logically explain why they behave in a certain way.
Only the most logically advanced humans can logically explain almost everything.
BTW, these "logically advanced" humans are not necessarily the most successful ones :-)
Bottom line: logical explanation ability is not a core AI feature.
> I think when you move to the complex problem solving, you will find out
> that the basic features you are playing with now are not so useful..
> When do you think you will be ready for the complex AI?
Not soon :-(
I need to implement simple AI first. That also takes a lot of time --- you know.
But what I know for sure is that if I try to implement complex AI (strong AI) as my first AI prototype, I will definitely fail.
Agree?
> I think you need a demo to see the problem.
> Why don't you code it?
Time. Development always takes a lot of time. Especially Research and Development.
>> Majority of humans' decisions are BASED on this statistical factor.
>> This majority consists of very simple problems/decisions though. (Like
>> if I see "2 + 2" then I remember "4").
> It's funny you have used this example..
> The world of math alone is a killer for your AI.
It's not exactly math. It's just remembering the right answer in a particular case.
> You cannot store all that info.
Why not?
If I need 1-digit precision, then my AI needs to remember just a few hundred combinations like:
==========
1 + 1 = 2
1 + 2 = 3
...
9 + 9 = 18
...
9 * 9 = 81
...
==========
Also my AI would use special math functions for calculations :-)
> The system needs to understand symbols "2", "+", "4", "=" separately.
Yes, but in a limited way.
Concept "2" may have relations with "1 + 1", "0 + 2", and "1 * 2".
"=" may be associated with internal math calculator. And with "2*2 = 4".
Etc.
Well, anyway, all this stuff is not for the nearest prototypes :-)
>> More complex decision making would be unavailable for this "statistical"
>> approach. BUT(!) --- this "statistical" approach would help to quickly
>> find limited set of possible solutions.
> Does not sound like an interesting AI to me.
I think that "simple AI features set" is:
#1 - required for simple AI implementation.
#2 - sufficient for simple AI implementation.
#3 - required for complex AI implementation.
#4 - not sufficient for complex AI implementation.
What statements (#1, #2, #3, #4) do you agree/disagree with?
>> Keep in mind, that more complex algorithms are too slow and cannot solve
>> the problem without simple "statistical" algorithm.
> Yes, but not with your type of "statistical" algorithm.
> The system needs to be able to work with "formulas"
> and parameter-variables, not just remembering "formula"-instances.
> with particular parameter-instances without being able to
> automatically reuse the "formulas" using various parameter values.
Most humans are not able to work with formulas.
They are still intelligent though.
> For example, my AI can learn the Pythagoras Theorem: a^2 + b^2 = c^2.
> Then it can use it for triangles of all sizes.
How would you reuse this math ability in a decision-making process like "finding an electrical power provider in my neighborhood"?
I think your algorithm would not be reusable at all.
> Your AI (as I understand it) can solve related question only
> if it finds an example with the particular numbers in it's memory.
You understand it almost right.
The only correction: the AI would also use external knowledge, like Google or other intelligent experts.
> It cannot handle the general way of thinking.
Searching in internal/external memory is 90% of the general way of thinking.
Another 9% is evaluating the results against the set of goals (both hardcoded and softcoded).
And another 1% is invention. This 1%:
- is not necessary;
- is impossible without "memory search" and "results evaluation".
> That's just terribly limited/inefficient. I said "formula" and I used a
> math example but this applies to all kinds of processes the AI needs to
> understand in order to solve something.
What structure of memory do you propose that would be flexible enough to keep heterogeneous information?
How would functionality reuse be implemented in your memory structure?
> Call it goal or "attraction point", I would not recommend to hardcode it.
Sometimes it's easier to code, than to implement "goal designer" for administrator.
> There should be some sort of Add/Edit/Delete mode for that (possibly
> for Admin(s) only).
You can implement one of the hardcoded goals in the form of "obey the administrator".
This is still an option.
> But I think you should be able to describe the source scenario, the
> target scenario and optionally some rules which cannot be broken. Then
> the AI should generate solution steps (assuming it has relevant
> resources).
- Babies don't understand a "source scenario", but they can still learn. Why can't AI be the same?
- I think you're still missing the point of what the hardcoded goals are.
A goal is not the final point of (self-)development.
Humans have hardcoded goals, but they don't have hardcoded goals like "become President of the US" or "earn $1 billion". These two examples are softcoded goals.
Bottom line:
In order to make the AI achieve such "high-level goals", the operator/admin has to carefully design a set of "simple goals".
Most probably the "simple goals" and the "high-level goals" would be different.
>> Goals provide direction of learning/self development.
> no need for "hardcode". Editable = better.
It is impossible to educate without a "desire to learn" (read: hardcoded "learning" goals) already implemented in the AI.
> you need to implement imagination in order to develop
> decent AI. I mean the AI needs to be able to generate some sort
> of model of the scene it thinks about. Not necessarily a 3D simulation
> but some type of model it could play with in its mind.
Model - yes.
Visual model - not necessarily (hint: blind people are still intelligent).
Actually, the whole memory structure is designed for building models (concepts and relations between concepts).
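A toy Python version of such a memory structure --- concepts as nodes with weighted relations, which is enough for non-visual "models"; the class and method names are assumptions of mine:
==========
from collections import defaultdict

class Memory:
    def __init__(self):
        self.relations = defaultdict(dict)   # concept -> {related concept: weight}

    def relate(self, a, b, weight=1.0):
        # Relations are kept symmetric here; a real system could use typed/directed ones.
        self.relations[a][b] = weight
        self.relations[b][a] = weight

    def model_of(self, concept):
        # The "model" of a concept is simply its related concepts, strongest first.
        return sorted(self.relations[concept].items(), key=lambda kv: -kv[1])

m = Memory()
m.relate("2", "1 + 1")
m.relate("2", "0 + 2", 0.5)
m.relate("=", "internal calculator")
print(m.model_of("2"))   # [('1 + 1', 1.0), ('0 + 2', 0.5)]
==========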
>> Hardcoded goals should evaluate feedback and make conclusions.
>> Not necessarily logical conclusions. More like emotional conclusions.
> I think everything should be logical and the AI should be able to explain
> the logic whenever requested by an authorized user.
It's nice to have such an ability, but... it's not necessary.
The ability to logically explain one's own reasoning comes with experience, education, a lot of thinking, conversations, and time.
Children mostly cannot logically explain why they behave in a certain way. But they still learn.
Adults have limited ability to logically explain why they behave in a certain way.
Only the most logically advanced humans can logically explain almost everything.
BTW, these "logically advanced" humans are not necessarily the most successful ones :-)
Bottom line: logical explanation ability is not a core AI feature.
> I think when you move to the complex problem solving, you will find out
> that the basic features you are playing with now are not so useful..
> When do you think you will be ready for the complex AI?
Not soon :-(
I need to implement simple AI first. That also takes a lot of time, you know.
But one thing I know for sure --- if I tried to implement complex AI (strong AI) as my first AI prototype, I would definitely fail.
Agree?
> I think you need a demo to see the problem.
> Why don't you code it?
Time. Development always takes a lot of time. Especially Research and Development.
Thursday, February 17, 2005
Goals and decision making
Keep in mind that a goal is not something ultimate. A goal is more like an attraction point.
There could be several attraction points.
They shouldn't conflict with each other.
But they could compete with each other. Or quite contrary --- help to each other.
Goals provide direction of learning/self development.
> If the feedback is also NL then it's not very clear to me how you can
> increase understanding to the input.
Feedback could come in different forms.
For instance, a "satisfaction signal".
Another option --- NL. But then a special NL parser should be able to extract key words from the NL and transform them into a "satisfaction signal".
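A very small sketch of such a parser, with keyword lists of my own choosing:
==========
POSITIVE = {"good", "great", "thanks", "correct", "yes"}
NEGATIVE = {"bad", "wrong", "no", "useless"}

def satisfaction_signal(feedback):
    # Turn natural-language feedback into a number in [-1, 1].
    words = set(feedback.lower().replace(",", " ").split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return max(-1, min(1, score))

print(satisfaction_signal("Good answer, thanks"))   # 1
print(satisfaction_signal("No, that is wrong"))     # -1
==========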
> other thing is that the AI is IMO not supposed to evaluate goals. It
> should be evaluating solutions.
Correct. AI should not evaluate hardcoded goals.
Hardcoded goals should evaluate feedback and make conclusions.
Not necessarily logical conclusions. More like emotional conclusions.
> I do not understand how you want to get complex problem solving
> working. That requires various types of reasoning.
I'm thinking about implementation of simple problem solving.
You are right --- complex problem solving requires more features.
I think basic features have to be implemented first.
Basic features would help to implement simple problem solving.
> Even if you combine ALL the words in all possible ways
Not in all possible combinations, but in "used combinations".
> and if you have
> all that statistically sorted based on how often various combinations
> go together, it will be extremely poor problem solver because majority
> of solutions are just not based on that kind of statistical factor
Majority of humans' decisions are BASED on this statistical factor.
This majority consists of very simple problems/decisions though. (Like if I see "2 + 2" then I remember "4").
More complex decision making would be beyond this "statistical" approach. BUT(!) --- this "statistical" approach would help to quickly find a limited set of possible solutions. And then more complex decision-making algorithms would select the right answer.
Keep in mind that more complex algorithms are too slow and cannot solve the problem without the simple "statistical" algorithm.
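A sketch of that two-stage idea, assuming simple co-occurrence counts for the "statistical" stage and a stubbed "complex" stage (all data and names here are illustrative):
==========
from collections import Counter

# Stage 1 ("statistical"): cheap counts of how often an answer followed a cue.
cooccurrence = Counter({
    ("2 + 2", "4"): 120,
    ("2 + 2", "5"): 2,
    ("capital of France", "Paris"): 300,
})

def quick_candidates(cue, top_n=3):
    # Fast: rank remembered answers by raw co-occurrence with the cue.
    ranked = [(ans, n) for (c, ans), n in cooccurrence.items() if c == cue]
    return [ans for ans, _ in sorted(ranked, key=lambda x: -x[1])[:top_n]]

def slow_check(cue, candidates):
    # Stage 2: the slower, "more complex" algorithm only looks at a few candidates.
    return candidates[0] if candidates else None   # stub for that algorithm

print(slow_check("2 + 2", quick_candidates("2 + 2")))   # 4
==========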
Saturday, February 12, 2005
Thursday, February 10, 2005
Brad Wyble
Brad Wyble
Email: B.Wyble@kent.ac.uk
Address: Computing Laboratory
University of Kent
Canterbury
Kent, CT2 7NF
Telephone: +44 (0)1227 827553 (direct line)
Facsimile: +44 (0)1227 762811
=================
Imagine,... you've got your 10^6 CPU's and you want to make
an AI. You have to devote some percentage of those CPU's to "thinking"
(i.e. analyzing and representing information) and the remainder to
restricting that thinking to some useful task. No one would argue, I
hope, that it's useful to blindly analyze all available information.
The part that's directing your resources is the control architecture and
it requires meticulous engineering and difficult design decisions.
What percentage do you allocate?
5%? 20%? The more you spend, the more efficiently the remaining CPU
power is spent. There's got to be a point at which you achieve a maximum
efficiency for your blob of silicon.
The brain is thoroughly riddled with such control architecture, starting
at the retina and moving back, it's a constant process of throwing out
information and compressing what's left into a more compact form. That's
really all your brain is doing from the moment a photon hits your eye,
determining whether or not you should ignore that photon. And it is a
Very Hard problem.
================
I used to think AGI was
practically a done deal. I figured we were 20 years out.
7 years in Neuroscience boot-camp changed that for good. I think anyone
who's truly serious about AI should spend some time studying at least one
system of the brain. And I mean really drill down into the primary
literature, don't just settle for the stuff on the surface which paints
nice rosy pictures.
Delve down to network anatomy, let your mind be blown by the precision and
complexity of the connectivity patterns.
Then delve down to cellular anatomy, come to understand how tightly
compact and well engineered our 300 billion CPUs are. Layers and layers
of feedback regulation interwoven with an exquisite perfection, both
within cells and between cells. What we don't know yet is truly
staggering.
I guarantee this research will permanently expand your mind.
Your idea of what a "Hard" problem is will ratchet up a few notches, and
you will never again look upon any significant slice of the AGI pie as
something simple enough that it can be done by a GA running on a few kg
of molecular switches.
Wednesday, February 02, 2005
http://en.wikipedia.org/wiki/Natural_language_processing
Some problems which make NLP difficult
Word boundary detection
In spoken language, there are usually no gaps between words; where to place the word boundary often depends on what choice makes the most sense grammatically and given the context. In written form, languages like Chinese do not signal word boundaries either.
Word sense disambiguation
Any given word can have several different meanings; we have to select the meaning which makes the most sense in context.
Syntactic ambiguity
The grammar for natural languages is not unambiguous, i.e. there are often multiple possible parse trees for a given sentence. Choosing the most appropriate one usually requires semantic and contextual information.
Imperfect or irregular input
Foreign or regional accents and vocal impediments in speech; typing or grammatical errors, OCR errors in texts.
Speech acts and plans
Sentences often don't mean what they literally say; for instance a good answer to "Can you pass the salt" is to pass the salt; in most contexts "Yes" is not a good answer, although "No" is better and "I'm afraid that I can't see it" is better yet. Or again, if a class was not offered last year, "The class was not offered last year" is a better answer to the question "How many students failed the class last year?" than "None" is.
Thursday, January 27, 2005
General intelligence without natural language processing is impossible
Jiri,
1) Yes, the human knowledge base is limited.
2) Children's knowledge is limited even more.
That's why children's intelligence is substantially weaker than adults' intelligence.
3) ARTCOM's knowledge would be EXTREMELY limited (because of a very poor communication channel).
That's why ARTCOM's general intelligence would be EXTREMELY weak.
4) If you suggest using a new language for communication with ARTCOM, then it's already not user-friendly.
What is more important: the knowledge base on the internet is practically unavailable to ARTCOM.
5) Yes, the ability to understand available data is critical.
That's exactly the direction to dig in for General AI research.
And this directly relates to the natural language reading problem.
6) You can measure my intelligence even if I have no external tools. I still have a knowledge database in my head.
What is important here: my intelligence with external tools (Google/Internet/other experts/...) would be considerably higher than my intelligence without external tools.
BTW, John Searle made a big mistake in his Chinese Room Argument.
A student with a dictionary is a different system than a student without a dictionary.
No wonder the "student with dictionary" system can speak Chinese, but the "student without dictionary" system cannot.
The intelligence levels of these two systems also differ.
7) The difference between "human search" and an "intelligent calculator" is huge:
Humans already have a huge knowledge base of possible solutions. The current problem activates the most relevant solutions in the knowledge base.
Then the human tries to apply these most relevant solutions and checks the results. The solution which brings the best result is selected.
(The quality of the result is evaluated against the human's goals.)
In addition --- the selected solution, relevant information, and relations between the problem and the solution are added to the knowledge base for future use.
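A hedged Python sketch of that retrieve/try/evaluate/store loop; the knowledge base, the toy problem, and the goal scoring are all my own illustration:
==========
knowledge_base = {
    "car won't start": ["check the battery", "check the fuel"],
}

def solve(problem, try_solution, goal_score):
    # 1. The problem activates the most relevant stored solutions (exact match here).
    candidates = knowledge_base.get(problem, [])
    if not candidates:
        return None
    # 2. Try each solution and 3. evaluate the result against the goals.
    results = [(s, goal_score(try_solution(s))) for s in candidates]
    best, _ = max(results, key=lambda r: r[1])
    # 4. Store the winning solution back for future use.
    if best not in knowledge_base[problem]:
        knowledge_base[problem].append(best)
    return best

print(solve("car won't start",
            try_solution=lambda s: "engine runs" if "battery" in s else "no change",
            goal_score=lambda result: 1.0 if result == "engine runs" else 0.0))
==========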
"Intelligent calculator" behaves in different way. Calculator doesn't use knowledge base, because it doesn't have solutions knowledge base.
Calculator doesn't have the ways to find the solution in the knowledge database either.
Calculator just applies some calculations to input data and returns the result.
You are building calculator.
Yes, you are going to implement small database based on past experience with user stories.
But the key to efficient general intelligence is HUGE database, not small one.
8) If you cover an essential amount of concepts and relations between concepts, then you will quickly get a big database.
Why do you think it would be small?
It would be small only if your input channel is inefficient (like a special language which wasn't used before).
9) A little data in the knowledge base is absolutely not enough for general intelligence!
There could be little data in the question, but the knowledge base has to be HUGE.
10) Since your system with a small knowledge base would be inefficient --- nobody would put data into your database. Therefore the project would die.
11) Contrary to General Intelligence, an HTML prototype works perfectly with one page. That's why some people learned HTML.
But even in the case of HTML, it took years before HTML became popular.
12) Your special language has other disadvantages aside from the "nobody uses it" problem.
It is less efficient than natural languages in supporting "General Intelligence Thought Process" and "General Topic Conversations".
13) If my long-term memory doesn't accept any new knowledge, then:
I will still be able to apply solutions from my huge database to new problems which are similar to old problems.
But this would be possible only because I already have a HUGE knowledge base.
The worst part of "read-only memory" is that deliberation would be impossible.
Adapting to the changes in the world would be impossible.
Improving problem-solving skills would be impossible.
Too many problems. Even with a HUGE knowledge base.
Without a HUGE knowledge base already in place, there would be practically no intelligence.
14) If you want to keep things as simple as possible --- don't invent your own language. This is not just useless, it's harmful to the system.
15) I hope I saved you some research/development time :-)