It happens like this: someone has a theory about what intelligence is and develops some software to implement it. Even if it does not work or doesn’t do anything that looks like intelligence, it is still considered “AI” because that is what they were aspiring to create.
Complex ideas are aggregates of simpler ones. The inescapable conclusion is that, if you keep decomposing ideas into their components, at some point you get to the end, or rather the beginning. This is the same conjecture that Democritus made about the material world: if you keep breaking things apart, eventually you get to the indivisible pieces he called “atoms.”
Today, the technical community, including Big Tech, government, and academia, has embraced artificial neural networks (ANNs) and machine learning (ML) as its preferred methodology. While this “data science” has led to many amazing applications that are impacting the way we live and work, misconceptions about what it is and what its potential may be are widespread.
Some inventions, like our sapiens, come “out of left field,” the result of a series of unforeseeable influences and events, pieces of a puzzle that come together at a certain point in time independent of and often contrary to the technology mainstream.
The relationship between language and knowledge has fascinated philosophers since ancient times. One theory is that language is a prerequisite for knowledge and that knowledge cannot exist without it. We talk as though language contains knowledge. But a simple thought experiment proves otherwise.
New Sapience began with a simple thesis: the quickest way to create a thinking machine is to give it something to think about. The symbolic crowd was on the right track when they focused, not on emulating the human brain like the connectionists, but on the end product of human cognition: knowledge. But there was a fatal flaw in their approach: the symbols themselves.
“Expecting to create an AGI without first understanding how it works is like expecting skyscrapers to fly if we build them tall enough.”
“What is needed is nothing less than a breakthrough in philosophy, a new epistemological theory…”
David Deutsch, quantum computation physicist at the University of Oxford
Everybody knows what AI is. It is the same thing that makes humans smart – but in a computer. We don’t have to know how it works because we know it when we see it: for instance, expressing your thoughts and recognizing that you have been understood.
Since its beginnings in the 1980s, the AI community has been rife with hyperbole and vague claims of programs that “think like humans,” but always without measurable results. Today, New Sapience is using assessment tools written for human students to measure sapiens’ comprehension.