David Sanders> I would like to see a section up on your site about the downsides of AIs and what preventative limits need to take place in research to ensure that AIs come out as the "good" part of humans and not the bad part. The military is already building robotic, self-propelled, thinking vehicles with weapons.
The recipe for "safe from bad guys" research is the same as the recipe
for any research: openness.
When ideas are available to society, many people (and later many
machines) will compete to implement them. And society (human society,
machine society, or a mixed society) will set up rules that prevent
major misuse of new technology.
David Sanders> How long do we really have before an AI (demented or otherwise) decides to eliminate its maker?
Why would you care?
Some children kill their parents. Did our society collapse because of
that?
Some AIs will be bad. Bad not just toward humans, but toward other
AIs.
But, as usual, the bad guys won't be a majority.
David Sanders> As countless science fiction stories have told us, even the most innocent of actions by an AI may spell disaster,
1) Those are works of fiction.
2) Some humans can cause disasters too. So what?
David Sanders> because, like I said above, they don't fundamentally understand us, and we don't understand them.
Why wouldn't AIs understand humans?
David Sanders> We will be two completely different species, and they might not hold the same sanctity of life most of us are born with.
Humans are not born with a sense of the sanctity of life. They gain it
(or don't) as they grow up.
The same will apply to machines.