Artificial intelligence has long promised to disrupt industries, revolutionize economies, and, generally speaking, transform the world as we know it. While many people have dismissed these claims as speculative science fiction, advancing technologies bring them closer to reality every day. The promises of AI are real, and smaller versions of those disruptions and revolutions hint at a tipping point in the near future. However, before the promise of AI can be fully realized, there are many challenges and barriers that have yet to be addressed.
Challenge 1: The “Black Box”
The most effective applications of AI pair the technology with human effort, and these AI-human partnerships hold truly revolutionary potential. As people use machine learning to automate time-consuming processes and free up time for deeper analysis, we are already seeing successful collaboration between machines and humans in areas like medical diagnosis, language translation, and customer service.
However, these partnerships succeed only when human beings actually use the AI technology. Too often, misgivings about the technology lead people to override AI-generated information at exactly the moment it could be most useful. For instance, studies have found that physicians are happy to use AI-generated information to back up their own diagnoses, but when the machine's findings contradict their intuition, they tend to reject those findings as errors.
The reason for this rejection is rooted in the “black box” phenomenon. Because people cannot see how an AI system reaches its conclusions, they do not trust the outcomes. More needs to be done to build trust in AI’s conclusions and processes so that the people responsible for decisions actually use the information AI provides to its full potential.
Challenge 2: Narrow Thinking and Application
While real, meaningful progress is being made in the AI field on a daily basis, most of the breakthroughs have been in applied AI rather than general AI. The difference between the two is crucial.
Applied AI refers to the replication of human intelligence for a very specific purpose. This is the AI that most people recognize from the world around them. A chatbot replaces the customer service representative, but it is programmed to provide a narrow set of responses. Law enforcement is using AI software to analyze video surveillance and better detect evidence of crimes. In these instances, the technology is designed to replicate a very specific human task.
General AI is the creation of machines that go beyond any one specific human task. These technologies would be capable of creating novel applications for their knowledge and datasets, and they would generate their own uses and purposes. When science fiction imagines a race of robots operating alongside human beings with their own autonomous motivations and intentions, it is envisioning a highly developed form of general AI.
To date, though, progress toward general AI lags far behind progress in applied AI. Without general AI, we are likely to see faster, more effective, and safer versions of tasks previously delegated to humans, but we are unlikely to see truly revolutionary ideas and systems, since we are essentially keeping operations the same while upgrading their efficiency.
Looking to the Future
The future of AI is still bright, and experts are convinced the AI-fueled revolutions that have long been promised are still on the horizon. However, the near-daily headlines about progress, and the excitement they generate, should be tempered with an understanding that we are still a long way from truly transformative applications of these technologies. Our own human perspectives limit the way we put these technologies to use, and until we find a way to overcome our own biases, we will likely use AI to become more efficient, but not radically different.