Friday, February 25, 2005

Finding a relevant answer in the question context

For some reason Jiri thinks that providing probable answers ordered by relevance wouldn't work well:

> 1) You will display "Top N" answers (in order to not overwhelm user)
> but the right answer might be in N+ because the quantity based "order
> by" will be invalid. Things are changing. An old info (which is
> incorrect today) can easily have more instances in the collected data.

That's why relations are constantly being updated.
If a wrong answer pops up, it will be applied. That would cause problems. Then the relations to this answer would be updated to make it less desirable.


> People deal with unique scenarios all the time.

Scenarios may be unique, but the components of scenarios are not unique at all.
The AI would divide scenarios into concepts (words, phrases, and optionally abstract concepts). Then experience regarding all these concepts would be summarized --- relevant concepts would be activated.

> I really do not think we need an AI searching for "average" answers in
> what we wrote. That's just useless.

You are wrong.
Google makes a huge profit in the business of answering simple and average questions.

> 3) If I'm gonna ask your AI something about Mr. Smith, how does it
> know what Smith I'm talking about. How could I clarify that when
> talking with your AI?

From the context of your question. You would probably put some info about Mr. Smith in it, right?
All these words, phrases, and optionally abstract concepts would be used for the answer search.

> Let's say it's clarified in question #1 and I got an answer, but now,
> I want to ask one more question about Mr. Smith. I have to clarify who
> he is again (assuming it's possible), right?

Short memory would help in this situation.
The AI parses your question into concepts. These concepts are stored in short memory. Gradually, all these concepts would be pushed out of short memory by new concepts, but this "pushing out" process wouldn't happen momentarily --- for some time the original concepts (related to Mr. Smith) would be preserved in short memory. The concepts most relevant to the Mr. Smith topic would stay in short memory even longer.
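
Here is a minimal sketch (in Python) of how such a short memory could work; the class name, the decay rate, and the cutoff value are just illustrative assumptions, not part of the actual design:
-----
# Minimal sketch of a short memory with gradual "pushing out" of concepts.
# Concept names and decay parameters are illustrative assumptions.

class ShortMemory:
    def __init__(self, decay=0.8, cutoff=0.05):
        self.activation = {}   # concept -> current activation level
        self.decay = decay     # how much activation survives each step
        self.cutoff = cutoff   # below this level a concept is pushed out

    def perceive(self, concepts, relevance=1.0):
        """Store concepts parsed from the latest question/input."""
        for concept in concepts:
            self.activation[concept] = self.activation.get(concept, 0.0) + relevance

    def step(self):
        """One time step: every concept fades; weak concepts are pushed out."""
        self.activation = {
            c: a * self.decay
            for c, a in self.activation.items()
            if a * self.decay >= self.cutoff
        }

    def active_concepts(self):
        return sorted(self.activation, key=self.activation.get, reverse=True)


memory = ShortMemory()
memory.perceive(["Mr. Smith", "neighbor", "lawsuit"])   # question #1
memory.step()
memory.perceive(["court", "date"])                       # question #2
print(memory.active_concepts())  # "Mr. Smith" is still active for the follow-up question
-----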


> Questions and relevant answers are often not together and when they
> are then there is often some "blah blah blah" between, causing your AI
> to display the useless "blah blah blah" instead of the answer.

Why do you assume that my AI would search only for web pages in Question/Answer format?

Any text would work.

Here are two possible implementations of answer search:

1) "Limited AI" implementation of answer search
Web pages with answers related to the user's question could be found by a concept match between the "question concept list" and the "answer concept lists" (see the sketch after this list).

2) Strong AI implementation of answer search
The question concept list would generate a sequence of softcoded routines (read: flexible routines configured by the AI itself), which would do whatever is necessary to find the answer. Possible routines could include searching on Google, reading, chatting, emailing, combinations of all of this with various parameters, etc.
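
Here is a minimal sketch (in Python) of the concept match from option 1; the naive word-level concept extraction and the overlap scoring are my assumptions:
-----
# Sketch of "Limited AI" answer search: rank pages by concept overlap
# between the question concept list and each page's concept list.
# The word-level extraction and the overlap scoring are illustrative assumptions.

def to_concepts(text):
    """Very naive concept extraction: lowercase words (a real parser would add phrases)."""
    return set(word.strip(".,?!").lower() for word in text.split())

def rank_pages(question, pages):
    """pages: dict of url -> page text. Returns urls sorted by concept match."""
    question_concepts = to_concepts(question)
    scores = {}
    for url, text in pages.items():
        page_concepts = to_concepts(text)
        scores[url] = len(question_concepts & page_concepts)
    return sorted(scores, key=scores.get, reverse=True)

pages = {
    "page1": "Mr. Smith moved to Boston in 2003 and works as an engineer.",
    "page2": "The weather in Boston is usually cold in February.",
}
print(rank_pages("Where does Mr. Smith work?", pages))  # page1 ranks first
-----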

AI output --- response in Natural Language

Jiri> how exactly you want to generate the response sentences?

There are two approaches to generating the answer:
1) Simple approach (for limited AI)
Just copy:
- content of the most relevant page
- reference to this page
(like Google does).

2) Writing text (for strong AI)
When the answer is prepared in short memory (in the form of an answer concept list), it should be converted into Natural Language text.
The AI already has relations between words and concepts, so we can produce NL text. The text wouldn't be nice to read, but it would already be in a natural language.

In order to make the text output better, the AI has to remember the typical flow of natural language. Such information could be stored in a TextPair table.

Information is gathered into the TextPair table during massive reading.
Basically, the TextPair table would hold statistical information about typical language constructions. (See the sketch below.)
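
Here is a minimal sketch (in Python) of how such a TextPair table could be filled during reading and then queried; the in-memory dictionary stands in for the real table, and the frequency-based lookup is an assumption:
-----
# Sketch of a TextPair table: count how often word B follows word A
# during "massive reading", then use these counts to smooth text output.
# The in-memory dict stands in for the actual database table.

from collections import defaultdict

text_pair = defaultdict(int)   # (word_a, word_b) -> how often b follows a

def read_text(text):
    words = text.lower().split()
    for a, b in zip(words, words[1:]):
        text_pair[(a, b)] += 1

def most_typical_next(word):
    """Pick the statistically most typical continuation of `word`."""
    candidates = {b: n for (a, b), n in text_pair.items() if a == word}
    return max(candidates, key=candidates.get) if candidates else None

read_text("the answer is prepared in short memory")
read_text("the answer is converted into natural language text")
print(most_typical_next("answer"))   # -> "is"
-----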

See also: Writer Prototype

Other things which could improve the writing:
1) Phrase concepts could be converted into text too.
2) Output sentences should be kept short. Translating one abstract concept into one sentence would be a good idea.
3) While looking through the Pair table, search for synonyms as substitutions for the original concepts.
4) The best feature, but the hardest to implement:
Use softcoded routines to generate the text --- for every concept, find a softcoded routine which relates both to this concept and to the "writing text" module.
These softcoded routines would output actual text.
Obviously these softcoded routines should be prepared prior to text generation. That could be done by the two standard strong AI learning techniques: "knowledge download" and "experiment".
For example, during an experiment, successful softcoded routines would be adopted/reinforced. Inefficient softcoded routines would be erased.


> If it involves connecting parts of sentences from various regions of
> data based on statistics then it will often generate garbage.

You are wrong.
Even the pretty dumb Eliza text generation algorithm works acceptably.
Why would a more efficient algorithm work worse?

Thursday, February 24, 2005

Strong AI: finding cause and effect

Jiri,
You claim that my strong AI design wouldn't be able to handle cause-effect relations. But the whole memory structure was designed exactly for the purpose of finding these cause-effect relations.

Some history
Originally I put two types of relations into the main memory design:
1) Cause-effect relations.
2) Parent-child relations.
But later on I decided that the system would be simpler, and still work efficiently, if I kept only one type of relation between concepts: cause-effect relations.

Back to current design
The strong AI design assumes that the main memory would keep millions of concepts connected by hundreds of millions of cause-effect relations.

With such a memory it would be easy to find the cause(s) of any specified effect(s).
It's also easy to find the effect(s) of any specified cause(s).
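
Here is a minimal sketch (in Python) of such lookups over a cause-effect graph; the adjacency-list representation and the sample relations are illustrative assumptions:
-----
# Sketch of main memory as a cause-effect graph.
# relations: cause -> list of (effect, strength). The representation is an assumption.

relations = {
    "rain":        [("wet ground", 0.9), ("traffic jam", 0.4)],
    "wet ground":  [("slippery road", 0.7)],
}

def effects_of(cause):
    """Find effect(s) for a specified cause."""
    return [effect for effect, strength in relations.get(cause, [])]

def causes_of(effect):
    """Find cause(s) for a specified effect by scanning the relations."""
    return [cause for cause, edges in relations.items()
            if any(e == effect for e, _ in edges)]

print(effects_of("rain"))         # ['wet ground', 'traffic jam']
print(causes_of("wet ground"))    # ['rain']
-----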

Your next question would probably be: "How can we put all these millions of cause-effect relations into the main memory?"

The one-word answer would be: "Learning".

The short answer would be: "Read the experiment and/or event correlation analyzer articles".

If you don't have time to read "Learning", "Experiment", and "Event Correlation Analyzer", then read at least this simplified example:
-----
AI sends message: "Hi, dude".
AI receives message: "Hello".
The event correlation analyzer adds a cause-effect relation between the concepts "Hi, dude" and "Hello".
-----

You can find the full version of this example on the experiment page.
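
A minimal sketch (in Python) of what the event correlation analyzer could do in this simplified example; the time window and the strength increment are illustrative assumptions:
-----
# Sketch of an event correlation analyzer: when one event closely follows
# another, strengthen the cause-effect relation between their concepts.
# The 2-second window and the +1.0 increment are illustrative assumptions.

events = [
    (0.0, "AI says: Hi, dude"),
    (1.2, "AI hears: Hello"),
]

relations = {}   # (cause_concept, effect_concept) -> strength

def correlate(events, window=2.0):
    for i, (t1, cause) in enumerate(events):
        for t2, effect in events[i + 1:]:
            if t2 - t1 <= window:
                key = (cause, effect)
                relations[key] = relations.get(key, 0.0) + 1.0

correlate(events)
print(relations)   # {('AI says: Hi, dude', 'AI hears: Hello'): 1.0}
-----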

Emotions in Strong AI

> Are emotions part of the "main functionality"?

Yes.
Emotions are part of the core AI functionality.

But in order to correctly understand my answer, you need to understand what I mean by emotions.

An emotion is a kind of advanced reflex. Typically an emotion consists of a group of reflexes working together. There can be many reflexes in a single emotion. That's why it's hard to predict an emotion even if you know the behavior of every reflex. The problem of predicting an emotional response is actually even worse, because usually the observer doesn't know which reflexes affect the emotional result.

On the other hand, it is not that hard to calculate the result of an emotion inside the AI system.
It just takes a bunch of straightforward calculations.
These calculations are really simple.

Example:
Let's assume that reflex1 (a softcoded routine) activates concept e1 if concept c1 is activated.
...
Let's assume that reflexN activates concept eN if concept c1 is activated.
("c" stands for "cause" and "e" stands for "effect".)

The whole emotion would activate concepts e1 ... eN.
These concepts e1 ... eN represent the emotional response of the AI.
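
A minimal sketch (in Python) of this calculation; the reflex list is illustrative:
-----
# Sketch of an emotion as a group of reflexes (softcoded routines).
# Each reflex maps an activated cause concept to an effect concept.

reflexes = [
    {"cause": "c1", "effect": "e1"},
    {"cause": "c1", "effect": "e2"},
    {"cause": "c1", "effect": "eN"},
]

def emotional_response(activated_concepts):
    """Run every reflex; collect the effect concepts it activates."""
    response = set()
    for reflex in reflexes:
        if reflex["cause"] in activated_concepts:
            response.add(reflex["effect"])
    return response

print(emotional_response({"c1"}))   # {'e1', 'e2', 'eN'} -- the AI's emotional response
-----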

Wednesday, February 23, 2005

Simple AI as a necessary prototype for complex AI

Jiri,

> I would not have any problem with (AI's) hardcoded goals if they are
> guaranteed to stay fully compatible with our goals.

Nobody can give such a guarantee.
For instance, the desire to protect their families was a component of the motivation of the suicide pilots who crashed into the Twin Towers in NYC in September 2001.

>Bottom line:
>In order to make AI to achieve such "high level goals",
>operator/admin has to carefully design set of "simple goals".

> Having enough data, AI can generate all the needed sub-goals and solutions.

Nope.
Without assistance (in the form of goals) it's practically impossible to learn.
Without learning it's practically impossible to achieve high-level goals.

Without the sexual instinct, reproduction is practically impossible.

> Optionally, admin can specify rules which cannot be broken during the
> problem solving process.

The problem-solving process is too delicate to hand over to an admin.
The solving process should be implemented by a developer under strict architect supervision.

>>It is impossible to educate without "desire to learn"
>> (read: "learning hardcoded goals") already implemented in AI.

> Not sure if I understand correctly. Assuming I do, I would say it
> applies to people, not to AI.

It applies to any intelligent learning system.


> The AI needs to be able to generate customized models for particular
> problem scenarios. The same question can be asked under different
> scenarios and the correct answers might be different or even contrary.

A different scenario means that this different scenario will be mentioned in the question.
If a different scenario is mentioned in the question, the simple AI would generate a different answer.

> That's one of the reasons why your AI cannot work well. Another one is
> that it cannot generalize.

Generalization is a different feature. It could be implemented later.
BTW, most humans don't generalize well.
They can borrow generalizations from other people, but typically don't create their own generalizations.
Simple AI will be able to borrow generalizations from NL text.

Bottom line: "Generalization ability is not a core AI feature".


>>BTW, these "logically advanced" humans are not necessarily the most
>>successful ones :-)

>Right.. Success takes some luck..

This is not about luck.
Strong communication skills and an efficient set of goals are far more important for intelligence than advanced logical skills.


> The most basic demo might be doable in a few days.

Nope :-(

> The parser which does the inserts should be relatively easy to do.

Correct, I successfully implemented it.
But this is not a full demo. Therefore there is nothing to show or experiment with.

> Put the sentence-parts into a single table as you have originally planned.
> Let it learn from locally stored text files...

This learning part takes a longer time to develop.

>>If I need 1 digit precision, then my AI need just remember few hundred
>>combinations

> There is an infinite number of combinations.

With 10 digits???

> It's terribly limited if it cannot do calculation it did not observed.

Intelligence is possible without the ability to count.
History proves it.

>>Also my AI would use special math functions for calculations :-)

> Good, you are getting there ;-)..

Well, NL text has to be processed first. After that, the need for calculations should be identified. Then parameters should be prepared and passed to the math functions.
For me it's obvious that AI can work without math, but cannot work without NL processing.


>>> The system needs to understand symbols "2", "+", "4", "=" separately.

>>Yes, but in a limited way.
>>Concept "2" may have relations with "1 + 1", "0 + 2", and "1 * 2".
>>"=" may be associated with internal math calculator. And with "2*2 = 4".
>>Etc.

> Crazy ;-)..

Sorry, but that's how our minds work.


> Here you go. Do not waste time with lots of coding. Google is your AI.
> The problem is that you would need a lot more magic than some synonyms
> from webster to turn it into a clever AI.

I cannot update the weights of Google's links.
That's why I cannot just play with Google.

Limited AI

As my first AI prototype, I'm going to implement an AI with a limited set of features.
Such a "limited AI" (or "simple AI") project should be relatively easy to implement.
The "Limited AI" project should make business sense on its own.
Features from this "Limited AI" should be useful for the "Full AI" ("Complex AI" / "Strong AI").

Here are these "Limited AI" features:

1) Memory in the form of a Neural Net:
A graph with concepts as nodes and relations as edges.

2) Natural Language processing.
Natural language is converted into Concepts. Appropriate relations are created.

3) Learning from Feedback.
Based on feedback from users/experts, relations between Concepts are updated.
The Feedback User Interface should be implemented in an easy-to-use form.
"Learning from Feedback" requires implementation of a simple prototype of the Motivation System.
"Learning from Feedback" has limited learning ability.

4) Forgetting.
Relations get weaker with time (unless learning happens).
Very weak relations are deleted from the system completely.
The same forgetting mechanism can be applied to concepts. (See the sketch below.)
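
Here is a minimal sketch (in Python) of such a memory with forgetting; the decay rate and the deletion threshold are illustrative assumptions:
-----
# Sketch of the "Limited AI" memory: concepts as nodes, weighted relations
# as edges, with forgetting applied to the relation weights.
# The decay rate and the deletion threshold are illustrative assumptions.

class ConceptMemory:
    def __init__(self, decay=0.99, delete_below=0.01):
        self.relations = {}          # (concept_a, concept_b) -> weight
        self.decay = decay
        self.delete_below = delete_below

    def reinforce(self, a, b, amount=1.0):
        """Learning (e.g. from user feedback) strengthens a relation."""
        self.relations[(a, b)] = self.relations.get((a, b), 0.0) + amount

    def forget(self):
        """Relations get weaker with time; very weak ones are deleted."""
        self.relations = {
            pair: w * self.decay
            for pair, w in self.relations.items()
            if w * self.decay >= self.delete_below
        }

memory = ConceptMemory()
memory.reinforce("question", "answer")
for _ in range(30):
    memory.forget()          # without reinforcement the relation slowly fades
print(memory.relations)      # the weight has decayed; eventually it would be deleted
-----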



What is not included in "limited AI":
1) Set of hardcoded goals (full "Motivation System").
2) "Self-programming" (Programmator, softcoded routines)

Tuesday, February 22, 2005

Simple AI as a necessary prototype for complex AI

Jiri,

> Call it goal or "attraction point", I would not recommend to hardcode it.

Sometimes it's easier to hardcode it than to implement a "goal designer" for the administrator.

> There should be some sort of Add/Edit/Delete mode for that (possibly
> for Admin(s) only).

You can implement one of the hardcoded goals in the form of "obey the administrator".
This is still an option.

> But I think you should be able to describe the source scenario, the
> target scenario and optionally some rules which cannot be broken. Then
> The AI should generate solution steps (assuming it has relevant
> resources).

- Babies don't understand a "source scenario", but they still can learn. Why can't the AI be the same?
- I think you're still missing the point of what the hardcoded goals are.
A goal is not the final point in (self-)development.
Humans have hardcoded goals, but they don't have hardcoded goals like "become the president of the US" or "earn $1 billion". These two examples are softcoded goals.

Bottom line:
In order to make the AI achieve such "high-level goals", the operator/admin has to carefully design a set of "simple goals".

Most probably "simple goals" and "high level goals" would be different.


>> Goals provide direction of learning/self development.

> no need for "hardcode". Editable = better.

It is impossible to educate without a "desire to learn" (read: "learning hardcoded goals") already implemented in the AI.



> you need to implement imagination in order to develop
> decent AI. I mean the AI needs to be able to generate some sort
> of model of the scene it thinks about. Not necessarily a 3D simulation
> but some type of model it could play with in it's mind.

Model - yes.
Visual model - not necessarily (hint: blind people are still intelligent).

Actually, the whole memory structure is designed for building models (concepts and relations between concepts).


>> Hardcoded goals should evaluate feedback and make conclusions.
>> Not necessarily logical conclusions. More like emotional conclusions.

> I think everything should be logical and the AI should be able to explain
> the logic whenever requested by an authorized user..

It's nice to have such an ability, but... it's not necessary.
The ability to logically explain one's logic comes with experience, education, a lot of thinking, conversations, and time.
Children mostly cannot logically explain why they behave in a certain way. But they still learn.

Adults have limited ability to logically explain why they behave in a certain way.

Only the most logically advanced humans can logically explain almost everything.
BTW, these "logically advanced" humans are not necessarily the most successful ones :-)

Bottom line: logical explanation ability is not a core AI feature.


> I think when you move to the complex problem solving, you will find out
> that the basic features you are playing with now are not so useful..
> When do you think you will be ready for the complex AI?

Not soon :-(

I need to implement simple AI first. It also takes a lot of time --- you know.

But what I know for sure is that if I try to implement complex AI (strong AI) as my first AI prototype, I would definitely fail.

Agree?

> I think you need a demo to see the problem.
> Why don't you code it?

Time. Development always takes a lot of time. Especially Research and Development.


>> Majority of humans' decisions are BASED on this statistical factor.
>> This majority consists of very simple problems/decisions though. (Like
>> if I see "2 + 2" then I remember "4").

> It's funny you have used this example..
> The world of math alone is a killer for your AI.

It's not exactly math. It's just remembering the right answer in a particular case.

> You cannot store all that info.

Why not?
If I need 1-digit precision, then my AI needs to remember just a few hundred combinations like:
==========
1 + 1 = 2
1 + 2 = 3
...
9 + 9 = 18
...
9 * 9 = 81
...
==========

Also, my AI would use special math functions for calculations :-)
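
A minimal sketch (in Python) of such remembered combinations, with a special math function as the fallback; the tiny fact table is purely illustrative:
-----
# Sketch: remembered arithmetic facts plus a special math function as fallback.
# The tiny fact table is illustrative; a real one would be filled by learning.

facts = {"1 + 1": "2", "1 + 2": "3", "9 + 9": "18", "9 * 9": "81"}

def answer(expression):
    if expression in facts:              # "statistical" recall of a known case
        return facts[expression]
    return str(eval(expression))         # special hardcoded math function

print(answer("9 * 9"))    # remembered
print(answer("12 * 7"))   # calculated by the math function
-----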

> The system needs to understand symbols "2", "+", "4", "=" separately.

Yes, but in a limited way.
Concept "2" may have relations with "1 + 1", "0 + 2", and "1 * 2".
"=" may be associated with internal math calculator. And with "2*2 = 4".
Etc.

Well, anyway, all this stuff is not for the nearest prototypes :-)

>> More complex decision making would be unavailable for this "statistical"
>> approach. BUT(!) --- this "statistical" approach would help to quickly
>> find limited set of possible solutions.

> Does not sound like an interesting AI to me.

I think that the "simple AI feature set" is:
#1 - required for simple AI implementation.
#2 - sufficient for simple AI implementation.
#3 - required for complex AI implementation.
#4 - not sufficient for complex AI implementation.

What statements (#1, #2, #3, #4) do you agree/disagree with?

>> Keep in mind, that more complex algorithms are too slow and cannot solve
>> the problem without simple "statistical" algorithm.

> Yes, but not with your type of "statistical" algorithm.
> The system needs to be able to work with "formulas"
> and parameter-variables, not just remembering "formula"-instances.
> with particular parameter-instances without being able to
> automatically reuse the "formulas" using various parameter values.

Most humans are not able to work with formulas.
They are still intelligent, though.

> For example, my AI can learn the Pythagoras Theorem: a^2 + b^2 = c^2.
> Then it can use it for triangles of all sizes.

How would you reuse this math ability in a decision-making process like "finding an electrical power provider in my neighborhood"?

I think your algorithm would not be reusable at all.


> Your AI (as I understand it) can solve related question only
> if it finds an example with the particular numbers in it's memory.

You understand it almost right.
The only correction is that the AI would also use external knowledge, like Google or other intelligent experts.

> It cannot handle the general way of thinking.

Searching in internal/external memory is 90% of the general way of thinking.
Another 9% is evaluating results against a set of goals (both hardcoded and softcoded).

And another 1% is invention. This 1% is:
- not necessary;
- impossible without "memory search" and "results evaluation".

> That's just terribly limited/inefficient. I said "formula" and I used a
> math example but this applies to all kinds of processes the AI needs to
> understand in order to solve something.

What is your memory structure which would be flexible enough to keep heterogeneous information?
How would functionality reuse be implemented in your memory structure?

Thursday, February 17, 2005

Goals and decision making

Keep in mind that a goal is not something ultimate. A goal is more like an attraction point.
There could be several attraction points.
They shouldn't conflict with each other.
But they could compete with each other. Or, quite the contrary, help each other.
Goals provide the direction of learning/self-development.

> If the feedback is also NL then it's not very clear to me how you can
> increase understanding to the input.

Feedback could be in a different form.
For instance, a "satisfaction signal".
Another option is NL. But then a special NL parser should be able to extract key words from the NL and transform them into a "satisfaction signal".
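
A minimal sketch (in Python) of such a parser; the key-word lists are illustrative assumptions:
-----
# Sketch: extract key words from NL feedback and turn them into a
# numeric "satisfaction signal". The word lists are illustrative assumptions.

POSITIVE = {"good", "great", "correct", "thanks", "helpful"}
NEGATIVE = {"wrong", "bad", "useless", "incorrect"}

def satisfaction_signal(feedback_text):
    words = {w.strip(".,!?") for w in feedback_text.lower().split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return score   # >0 satisfied, <0 dissatisfied, 0 neutral

print(satisfaction_signal("Thanks, that answer was helpful"))   # 2
print(satisfaction_signal("No, that is wrong and useless"))     # -2
-----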

> other thing is that the AI is IMO not supposed to evaluate goals. It
> should be evaluating solutions.

Correct. The AI should not evaluate hardcoded goals.
Hardcoded goals should evaluate feedback and draw conclusions.
Not necessarily logical conclusions. More like emotional conclusions.
> I do not understand how you want to get complex problem solving
> working. That requires various types of reasoning.

I'm thinking about the implementation of simple problem solving.
You are right --- complex problem solving requires more features.
I think basic features have to be implemented first.
Basic features would help to implement simple problem solving.
> Even if you combine ALL the words in all possible ways

Not in all possible combinations, but in "used combinations".
> and if you have
> all that statistically sorted based on how often various combinations
> go together, it will be extremely poor problem solver because majority
> of solutions are just not based on that kind of statistical factor

The majority of humans' decisions ARE based on this statistical factor.
This majority consists of very simple problems/decisions, though (like: if I see "2 + 2" then I remember "4").
More complex decision making would be unavailable to this "statistical" approach. BUT(!) --- this "statistical" approach would help to quickly find a limited set of possible solutions. And then more complex decision-making algorithms would select the right answer.
Keep in mind that more complex algorithms are too slow and cannot solve the problem without the simple "statistical" algorithm.
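
A minimal sketch (in Python) of this two-stage idea; the scoring functions and the sample candidates are illustrative assumptions:
-----
# Sketch of the two-stage decision making described above:
# a fast "statistical" pass narrows down the candidates, then a slower,
# more complex evaluation picks the final answer. Scores are illustrative.

def statistical_filter(question_concepts, candidates, top_n=2):
    """Fast pass: keep only the candidates with the best concept overlap."""
    scored = sorted(candidates,
                    key=lambda c: len(question_concepts & c["concepts"]),
                    reverse=True)
    return scored[:top_n]

def careful_evaluation(candidate):
    """Slow pass: stands in for a more complex (and expensive) evaluation."""
    return candidate["quality"]

def decide(question_concepts, candidates):
    short_list = statistical_filter(question_concepts, candidates)
    return max(short_list, key=careful_evaluation)

candidates = [
    {"name": "answer A", "concepts": {"smith", "boston"}, "quality": 0.4},
    {"name": "answer B", "concepts": {"smith", "work"},   "quality": 0.9},
    {"name": "answer C", "concepts": {"weather"},         "quality": 0.7},
]
print(decide({"smith", "work"}, candidates)["name"])   # answer B
-----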

Saturday, February 12, 2005

Re: [agi] Cell

Artificial General Intelligence maillist archive
AGI maillist archive

Thursday, February 10, 2005

Brad Wyble

Brad Wyble
Email: B.Wyble@kent.ac.uk
Address: Computing Laboratory
University of Kent
Canterbury
Kent, CT2 7NF
Telephone: +44 (0)1227 827553 (direct line)
Facsimile: +44 (0)1227 762811
=================

Imagine... you've got your 10^6 CPUs and you want to make
an AI. You have to devote some percentage of those CPUs to "thinking"
(i.e. analyzing and representing information) and the remainder to
restricting that thinking to some useful task. No one would argue, I
hope, that it's useful to blindly analyze all available information.

The part that's directing your resources is the control architecture, and
it requires meticulous engineering and difficult design decisions.
What percentage do you allocate?

5%? 20%? The more you spend, the more efficiently the remaining CPU
power is spent. There's got to be a point at which you achieve a maximum
efficiency for your blob of silicon.

The brain is thoroughly riddled with such control architecture, starting
at the retina and moving back, it's a constant process of throwing out
information and compressing what's left into a more compact form. That's
really all your brain is doing from the moment a photon hits your eye,
determining whether or not you should ignore that photon. And it is a
Very Hard problem.

================
I used to think AGI was
practically a done deal. I figured we were 20 years out.

7 years in Neuroscience boot-camp changed that for good. I think anyone
who's truly serious about AI should spend some time studying at least one
system of the brain. And I mean really drill down into the primary
literature, don't just settle for the stuff on the surface which paints
nice rosy pictures.

Delve down to network anatomy, let your mind be blown by the precision and
complexity of the connectivity patterns.

Then delve down to cellular anatomy, come to understand how tightly
compact and well engineered our 300 billion CPUs are. Layers and layers
of feedback regulation interwoven with an exquisite perfection, both
within cells and between cells. What we don't know yet is truly
staggering.

I guarantee this research will permanently expand your mind.

Your idea of what a "Hard" problem is will ratchet up a few notches, and
you will never again look upon any significant slice of the AGI pie as
something simple enough that it can be done by a GA running on a few kg
of molecular switches.

Wednesday, February 02, 2005

http://en.wikipedia.org/wiki/Natural_language_processing

Some problems which make NLP difficult
Word boundary detection
In spoken language, there are usually no gaps between words; where to place the word boundary often depends on what choice makes the most sense grammatically and given the context. In written form, languages like Chinese do not signal word boundaries either.
Word sense disambiguation
Any given word can have several different meanings; we have to select the meaning which makes the most sense in context.
Syntactic ambiguity
The grammar for natural languages is not unambiguous, i.e. there are often multiple possible parse trees for a given sentence. Choosing the most appropriate one usually requires semantic and contextual information.
Imperfect or irregular input
Foreign or regional accents and vocal impediments in speech; typing or grammatical errors, OCR errors in texts.
Speech acts and plans
Sentences often don't mean what they literally say; for instance a good answer to "Can you pass the salt" is to pass the salt; in most contexts "Yes" is not a good answer, although "No" is better and "I'm afraid that I can't see it" is better yet. Or again, if a class was not offered last year, "The class was not offered last year" is a better answer to the question "How many students failed the class last year?" than "None" is.