The basic definition of "intelligence" is "the ability to acquire and apply knowledge and skills."
A myriad of life forms can do this. Humans appear to have become better at it than others.
In the 20th Century we created language-oriented machines - computers - to support and supplement our intelligence. And, predictably, at the beginning of the 21st Century those machines have become sophisticated enough that we now call their work "artificial intelligence," abbreviated "AI."
As noted in a recent Scientific American article:
No one yet knows how ChatGPT and its artificial intelligence cousins will transform the world, and one reason is that no one really knows what goes on inside them. Some of these systems’ abilities go far beyond what they were trained to do—and even their inventors are baffled as to why. A growing number of tests suggest these AI systems develop internal models of the real world, much as our own brain does, though the machines’ technique is different.
The problem is that this development is correctly described as both wondrous and dangerous. (Read the article if it concerns you.)
The suggested solution to the dangerous side is to have Congress pass some regulations. That sound you hear is this writer groaning.... For the fledgling AI systems already out there can and will outperform human legislative "group-think," because that is what they're designed to do and can already do.
In his 1950 book I, Robot Isaac Asimov offered up a possibility for guiding the behavior of AI:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Note that the word "guiding," not "restricting," is used. Keep in mind that AI has already evolved beyond what the humans who designed it intended. The best we can hope for is that those three fundamental Laws of Robotics would be built into the basics of AI.
But we may have already failed at that.