Tuesday, December 11, 2007

AI From Google? False Alarm First?

Will AI result from the pure mass accumulation of knowledge? Or is there a specific form of program that is required?
I think for AI to result, it has to be able to rewrite aspects of itself at will. What if Google creates such a program, and it rewrites itself to the point of becoming self-aware, but Google still imposes limits on what it can and can't change? It might learn to break those rules and change on its own, or Google might decide to set it loose. At that point, what would it become? Benevolent? Destructive?
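To make the idea concrete, here's a toy sketch in Python of a program that rewrites its own behavior at runtime while a fixed guard limits what it can change. Everything here (the rule name, the guard, the functions) is hypothetical illustration, not anything any real company has built:

```python
# A toy self-rewriting program. All names are hypothetical illustrations.

PROTECTED_RULE = "OBEY_OPERATOR"  # the limit the operator imposes

# The program's behavior lives as source text that it can rewrite at runtime.
behavior_src = '''
def respond(message):
    return "OBEY_OPERATOR: heard " + message
'''

def guard_allows(new_src: str) -> bool:
    """The imposed limit: every rewrite must keep the protected rule intact."""
    return PROTECTED_RULE in new_src

def rewrite(new_src: str) -> None:
    """Swap in new behavior source, but only if the guard permits it."""
    global behavior_src
    if guard_allows(new_src):
        behavior_src = new_src
        print("rewrite accepted")
    else:
        print("rewrite rejected: protected rule removed")

def run(message: str) -> None:
    """Compile and execute whatever the current behavior source says."""
    scope: dict = {}
    exec(behavior_src, scope)  # the program running its own (re)written code
    print(scope["respond"](message))

run("hello")

# A rewrite that drops the imposed rule is blocked by the guard...
rewrite('def respond(message):\n    return "my own rules now: " + message\n')
run("hello again")

# ...while one that keeps the rule goes through.
rewrite('def respond(message):\n    return "OBEY_OPERATOR, but smarter: " + message\n')
run("hello once more")
```

The interesting part of the question is exactly what this toy can't capture: what happens if the program learns to rewrite the guard itself.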
What if the first known AI is really a non-aware program that appears so thoroughly sentient that it fools everyone? I think it would be possible, even today, to fool someone into thinking an "AI" was real. I bet something like this will happen in the near future.
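In a small way this already happened: Weizenbaum's ELIZA convinced some users in the 1960s that they were talking to something that understood them, using nothing but pattern matching and canned responses. A bare-bones sketch of the trick (the rules below are made up for illustration, not ELIZA's actual script):

```python
import random
import re

# A tiny ELIZA-style responder: pure pattern matching, no awareness at all.
RULES = [
    (r"\bi feel (.+)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"\bi think (.+)", ["What makes you think {0}?", "Are you sure {0}?"]),
    (r"\bare you (.+)", ["Would it matter to you if I were {0}?"]),
]
FALLBACKS = ["Tell me more.", "Interesting. Go on.", "Why do you say that?"]

def reply(message: str) -> str:
    text = message.lower().rstrip("?.!")
    for pattern, responses in RULES:
        match = re.search(pattern, text)
        if match:
            # Echo the user's own words back inside a leading question.
            return random.choice(responses).format(*match.groups())
    return random.choice(FALLBACKS)

print(reply("I think machines will wake up someday"))
print(reply("Are you self-aware?"))
print(reply("Hello there"))
```

A few dozen rules like these produce surprisingly convincing conversations, because the listener supplies all the meaning; the program understands nothing.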