Friday, February 25, 2005

Finding relevant answer in the question context

For some reason Jiri thinks that providing probable answers ordered by relevance wouldn't work well:

> 1) You will display "Top N" answers (in order to not overwhelm user)
> but the right answer might be in N+ because the quantity based "order
> by" will be invalid. Things are changing. An old info (which is
> incorrect today) can easily have more instances in the collected data.

That's why relations are constantly updated.
If a wrong answer popped up, it would get applied and cause problems; the relations to that answer would then be updated to make it less desirable.


> People deal with unique scenarios all the time.

Scenarios may be unique, but the components of scenarios are not unique at all.
The AI would divide scenarios into concepts (words, phrases, and optionally abstract concepts). Then experience regarding all these concepts would be summarized: relevant concepts would be activated.
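A minimal sketch of what this could look like, assuming a simple word-level notion of "concept" and an invented experience store (the example concepts and weights are made up for illustration):

```python
# Hypothetical sketch: split a scenario into concepts, then activate the
# concepts for which prior experience exists above some threshold.

def extract_concepts(scenario):
    """Split a scenario into word-level concepts (naive whitespace split)."""
    return [w.strip(".,?!").lower() for w in scenario.split()]

# Assumed experience store: concept -> accumulated relevance weight.
experience = {"engine": 5.0, "overheating": 3.0, "coolant": 4.0}

def activate(scenario, threshold=1.0):
    """Return the scenario's concepts whose summarized experience clears the threshold."""
    concepts = extract_concepts(scenario)
    return {c: experience[c] for c in concepts
            if experience.get(c, 0.0) >= threshold}

print(activate("My engine is overheating, what now?"))
# Only "engine" and "overheating" are activated; the rest have no experience.
```

A real system would of course use phrases and abstract concepts too, not just single words, but the activation step would follow the same pattern.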

> I really do not think we need an AI searching for "average" answers in
> what we wrote. That's just useless.

You are wrong.
Google makes a huge profit in the business of answering simple and average questions.

> 3) If I'm gonna ask your AI something about Mr. Smith, how does it
> know what Smith I'm talking about. How could I clarify that when
> talking with your AI?

From the context of your question. You would probably include some info about Mr. Smith, right?
All these words, phrases, and optionally abstract concepts would be used for answer search.

> Let's say it's clarified in question #1 and I got an answer, but now,
> I want to ask one more question about Mr. Smith. I have to clarify who
> he is again (assuming it's possible), right?

Short-term memory would help in this situation.
The AI parsed your question into concepts, and these concepts are stored in short-term memory. Gradually, all of them would be pushed out by new concepts, but this pushing-out process wouldn't happen instantly: for some time the original concepts (related to Mr. Smith) would be preserved in short-term memory. The concepts most relevant to the Mr. Smith topic would stay there even longer.
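The short-term memory described above could be sketched like this; a minimal model, assuming a fixed capacity and an invented relevance score per concept, where eviction targets the least relevant concept rather than simply the oldest one:

```python
# Hypothetical sketch of a short-term memory: new concepts gradually push
# old ones out, but more relevant concepts survive longer.

class ShortTermMemory:
    def __init__(self, capacity=5):
        self.capacity = capacity
        self.items = {}  # concept -> relevance score

    def add(self, concept, relevance=1.0):
        # Re-mentioning a concept reinforces it instead of duplicating it.
        self.items[concept] = self.items.get(concept, 0.0) + relevance
        if len(self.items) > self.capacity:
            # Push out the least relevant concept, not simply the oldest.
            weakest = min(self.items, key=self.items.get)
            del self.items[weakest]

mem = ShortTermMemory(capacity=3)
for concept, score in [("mr_smith", 3.0), ("address", 1.0),
                       ("phone", 1.0), ("weather", 1.0)]:
    mem.add(concept, score)
print(sorted(mem.items))  # "mr_smith" survives; a weaker concept was evicted
```

With this behavior, a highly relevant concept like "mr_smith" stays available for follow-up questions even as newer, weaker concepts cycle through.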


> Questions and relevant answers are often not together and when they
> are then there is often some "blah blah blah" between, causing your AI
> to display the useless "blah blah blah" instead of the answer.

Why do you assume that my AI would search only for web pages in Question/Answer format?

Any text would work.

Here are two possible implementations of answer search:

1) "Limited AI" implementation of answer search
Web pages with answers related to the user's question could be found by a concept match between the "question concept list" and the "answer concept lists".

2) Strong AI implementation of answer search
The question concept list would generate a sequence of softcoded routines (read: flexible routines configured by the AI itself), which would do whatever is necessary to find the answer. Possible routines could include searching on Google, reading, chatting, emailing, combinations of all this stuff with various parameters, etc.
