Why ‘Human-Like’ Is A Low Bar For Most AI Projects
There have been a lot of market offerings for “human-like” machines recently, and from what I have seen, most of them are a let-down. The AI market is expected to easily eclipse $300 billion by 2025, and most of the companies trying to cash in on this craze are marketing their AI-driven technologies as having “human-like” responses. Perhaps it is time to rethink this approach.
Why is AI not Human-like?
The big idea is that human-like AI is an upgrade. Computers can be complicated and require complicated instructions, but AI can learn. Unfortunately, humans are not great at the kinds of tasks a computer excels at, and AI is not the best at the kinds of tasks that humans are. That is why researchers are moving away from development paradigms that focus on imitating human cognition: they have realized the pursuit is fruitless.
A pair of New York University researchers recently did a deep dive into how humans and AI process words and word meaning. Through this study of “psychological semantics,” the pair hoped to expand the public’s understanding of the shortcomings of machine learning systems in the natural language processing (NLP) domain.
The sentiment they shared in the study was that AI researchers do not dwell on whether their models are human-like. If someone developed an incredibly accurate machine translation system using various text annotation tools, it is doubtful many users would insist that it also behave like a human.
The Field of Translation
In the field of translation, humans have a range of approaches for keeping multiple languages in their brains and often fluidly jump between them. Machines, on the other hand, do not need to understand what a word means to correctly assign a translation to it. This gets a bit tricky when you get closer to human-level accuracy. Translating a few words like “one, two, three” into Spanish is simple enough: the machine learns that they are equivalent to uno, dos, and tres and is likely to get those correct 100% of the time. But when you add more variables like complex words and semantically broad words, with some colloquial speech thrown in, things can get complicated.
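The lookup-style translation described above can be sketched in a few lines. This is a deliberately minimal illustration, not a real machine translation system; the word table and the fallback marker are my own assumptions.

```python
# A toy word-for-word "translator": it assigns translations without any
# understanding of meaning, exactly as described above.
WORD_TABLE = {"one": "uno", "two": "dos", "three": "tres"}

def translate_word(word: str) -> str:
    """Return the Spanish equivalent if known, else flag it as untranslated."""
    return WORD_TABLE.get(word.lower(), f"<?{word}?>")

def translate(sentence: str) -> str:
    """Translate each word independently -- fine for 'one two three',
    hopeless for slang or semantically broad words."""
    return " ".join(translate_word(w) for w in sentence.split())
```

Calling `translate("one two three")` gives `"uno dos tres"` every time, while anything colloquial falls straight through to the `<?...?>` fallback, which is the gap the article is pointing at.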
The AI’s uncanny valley becomes more apparent when developers try to create translation algorithms that can handle pretty much everything. Much like taking a few Spanish classes will not teach a person all the slang they might end up hearing in Mexico City, AI struggles to keep up with an ever-changing human lexicon.
NLP is not capable of human-like cognition yet, and making it exhibit these behaviors at this stage in the game is just silly. Imagine if Google Translate balked at a request because it found the word “moist” uncomfortable, for example.
Automation vs AI
From an engineering point of view, most human jobs can be broken down into individual tasks that would be better suited for automation than AI, and in cases where neural networks offer better utility – directing cars on a road, for example – it is hard to think of a use-case where a general AI would outperform several narrow, task-specific systems.
There is no function a smart speaker performs that could not be handled by a button. If you had infinite space and an overwhelming level of patience, you could use buttons for anything a smart speaker does. It would be easy to implement, though perhaps not the most user-friendly setup. Alexa is basically a giant remote control.
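The “giant remote control” point can be made concrete: each voice intent maps to one fixed action, exactly as a dedicated physical button would. The intent phrases and actions below are hypothetical examples, not Alexa’s actual skill model.

```python
# Each entry behaves like one physical button: a fixed phrase
# triggers a fixed action, with no understanding in between.
def lights_on() -> str:
    return "lights on"

def play_music() -> str:
    return "playing music"

BUTTONS = {
    "turn on the lights": lights_on,
    "play some music": play_music,
}

def handle(utterance: str) -> str:
    """Dispatch an utterance to its 'button', if one exists."""
    action = BUTTONS.get(utterance.lower())
    return action() if action else "sorry, no button for that"
```

Strip away the speech recognition front end and what remains is a lookup table of buttons, which is the article’s point about automation often being enough.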
AI is not like humans, and that is a great thing. I believe that both humans and AI can bring a lot to the table by working together instead of one totally replacing the other. So, for the time being, human-level AI is not going to happen, and robot servants remain a distant prospect. Currently, all developers can do is imitate human-like interactions between AI and people. But automation might be the more efficient route.