9 AI trends on our radar
How new developments in automation, machine deception, hardware, and more will shape AI.
Here are key AI trends business leaders and practitioners should watch in the months ahead.
We will start to see technologies enable partial automation of a variety of tasks.
Automation occurs in stages. While full automation might still be a ways off, there are many workflows and tasks that lend themselves to partial automation. In fact, McKinsey estimates that “fewer than 5% of occupations can be entirely automated using current technology. However, about 60% of occupations could have 30% or more of their constituent activities automated.”
We have already seen some interesting products and services that rely on computer vision and speech technologies, and we expect to see even more in 2019. Look for additional improvements in language models and robotics that will result in solutions targeting text and physical tasks. Rather than wait for complete automation, organizations will be driven by competition to implement partial automation solutions—and the success of those partial automation projects will spur further development.
AI in the enterprise will build upon existing analytic applications.
Companies have spent the last few years building processes and infrastructure to unlock disparate data sources and improve their most mission-critical analytic applications, whether those are business analytics, recommenders and personalization, forecasting, or anomaly detection and monitoring.
Aside from new systems that use vision and speech technologies, we expect early forays into deep learning and reinforcement learning will be in areas where companies already have data and machine learning in place. For example, companies are infusing their systems for temporal and geospatial data with deep learning, resulting in scalable and more accurate hybrid systems (i.e., systems that combine deep learning with other machine learning methods).
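To make "hybrid" concrete, here is a minimal sketch for temporal data, with the series, window size, and network size all invented for illustration: a classical seasonal-naive baseline captures the regular daily pattern, and a small neural network (scikit-learn's MLPRegressor, standing in for a deep learning component) learns to correct the baseline's residuals.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy hourly series: daily seasonality plus a nonlinear quirk (illustrative only)
rng = np.random.default_rng(0)
t = np.arange(24 * 60)
y = 10 + 3 * np.sin(2 * np.pi * t / 24) + 0.5 * np.sin(2 * np.pi * t / 7) ** 3 + rng.normal(0, 0.3, t.size)

period = 24
baseline = y[:-period]                 # seasonal-naive forecast: repeat yesterday's value
target = y[period:]
residual = target - baseline           # what the seasonal baseline misses

# A small neural net predicts the residual from a window of recent values
window = 12
X = np.array([y[i - window:i] for i in range(period, y.size)])
split = int(0.8 * X.shape[0])
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(X[:split], residual[:split])

# Hybrid forecast = classical baseline + learned residual correction
hybrid = baseline[split:] + net.predict(X[split:])
naive_mae = np.mean(np.abs(residual[split:]))            # baseline alone
hybrid_mae = np.mean(np.abs(target[split:] - hybrid))    # baseline + net
print(f"seasonal-naive MAE: {naive_mae:.3f}, hybrid MAE: {hybrid_mae:.3f}")
```

The appeal of this arrangement is that the classical component stays cheap and interpretable, while the learned component only has to model what the baseline misses.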
In an age of partial automation and human-in-the-loop solutions, UX/UI design will be critical.
Many current AI solutions work hand in hand with consumers, human workers, and domain experts. These systems improve the productivity of users and in many cases enable them to perform tasks at incredible scale and accuracy. Proper UX/UI design not only streamlines those tasks but also goes a long way toward getting users to trust and use AI solutions.
We will see specialized hardware for sensing, model training, and model inference.
The resurgence in deep learning began around 2011 with record-setting models in speech and computer vision. Today, there is certainly enough scale to justify specialized hardware—Facebook alone makes trillions of predictions per day. Google, too, has had enough scale to justify producing its own specialized hardware: it has been using its tensor processing units (TPUs) in its cloud since last year. 2019 should see a broader selection of specialized hardware begin to appear. Numerous companies and startups in China and the US have been working on hardware that targets model building and inference, both in the data center and on edge devices.
AI solutions will continue to rely on hybrid models.
While deep learning continues to drive a lot of interesting research, most end-to-end solutions are hybrid systems. In 2019, we’ll begin to hear more about the essential role of other components and methods—including model-based methods like Bayesian inference, tree search, evolution, knowledge graphs, simulation platforms, and many more. And we just might begin to see exciting developments in machine learning methods that aren’t based on neural networks.
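To give a flavor of one such combination, the sketch below uses an evolutionary loop (random mutation plus selection, with no backpropagation) to fit the weights of a tiny neural network on the XOR problem. The network size, mutation scale, and population size are arbitrary choices made for the example.

```python
import numpy as np

# XOR: a small problem a linear model cannot solve
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

def forward(w, x):
    """Tiny 2-4-1 network; w is a flat vector of 17 parameters."""
    W1, b1 = w[:8].reshape(2, 4), w[8:12]
    W2, b2 = w[12:16], w[16]
    h = np.tanh(x @ W1 + b1)
    return 1 / (1 + np.exp(-(h @ W2 + b2)))   # sigmoid output

def loss(w):
    return np.mean((forward(w, X) - y) ** 2)

rng = np.random.default_rng(0)
parent = rng.normal(0, 1, 17)

# Simple (1+lambda)-style evolution: keep the parent unless a mutated child does better
for generation in range(500):
    children = parent + rng.normal(0, 0.3, (20, 17))
    scores = np.array([loss(c) for c in children])
    if scores.min() < loss(parent):
        parent = children[scores.argmin()]

print("predictions:", np.round(forward(parent, X), 2))  # should approach [0, 1, 1, 0]
```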
AI successes will spur investments in new tools and processes.
We are in a highly empirical era for machine learning. Tools for ML development will need to account for the importance of data, experimentation and model search, and model deployment and monitoring. Take just one step of the process: model building. Companies are beginning to look into tools for data lineage, metadata management and analysis, efficient utilization of compute resources, efficient model search, and hyperparameter tuning. In 2019, expect many new tools to ease the development and actual deployment of AI and ML in products and services.
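Hyperparameter tuning is one of those steps that already has accessible tooling. Here is a minimal sketch using scikit-learn's RandomizedSearchCV; the dataset, model, and search ranges are placeholders chosen purely for illustration.

```python
from scipy.stats import randint
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_breast_cancer(return_X_y=True)

# Sample random configurations from these ranges instead of exhausting a full grid
param_distributions = {
    "n_estimators": randint(50, 300),
    "max_depth": randint(2, 12),
    "min_samples_leaf": randint(1, 10),
}
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions,
    n_iter=25,          # number of sampled configurations
    cv=5,               # 5-fold cross-validation per configuration
    scoring="roc_auc",
    random_state=0,
)
search.fit(X, y)
print("best params:", search.best_params_)
print("best CV AUC:", round(search.best_score_, 3))
```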
Machine deception will remain a serious challenge.
In spite of a barrage of “fake” news, we’re still in the early days of machine-generated content (fake images, video, audio, and text). At least for now, detection and forensic technologies have been able to ferret out fake video and images. But the tools for generating fake content are improving quickly, so funding agencies in the US and elsewhere have initiated programs to make sure detection technologies keep up.
And machine deception does not just refer to machines deceiving humans; machines deceiving machines (bots) and people deceiving machines (troll armies and click farms) can be just as difficult to deal with. Information propagation methods and click farms will continue to be used to fool ranking systems on content and retail platforms, and methods to detect and combat this will have to be developed as fast as new forms of machine deception are launched.
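As one small illustration of the detection side, the sketch below applies scikit-learn's IsolationForest to a made-up table of per-account click features, flagging accounts whose behavior looks unlike the rest; real platforms rely on far richer signals and feedback loops than this.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-account features: [clicks per hour, distinct items clicked, conversion rate]
organic = np.column_stack([
    rng.normal(5, 2, 980).clip(0),        # modest click rates
    rng.normal(20, 8, 980).clip(1),       # browse many different items
    rng.normal(0.05, 0.02, 980).clip(0, 1),
])
click_farm = np.column_stack([
    rng.normal(60, 10, 20).clip(0),       # very high click rates
    rng.normal(3, 1, 20).clip(1),         # hammer the same few items
    rng.normal(0.0, 0.005, 20).clip(0, 1),
])
accounts = np.vstack([organic, click_farm])

# Unsupervised anomaly detection: no labels needed, just an expected contamination rate
detector = IsolationForest(contamination=0.02, random_state=0).fit(accounts)
flags = detector.predict(accounts)        # -1 = anomalous, 1 = normal
print("flagged accounts:", np.sum(flags == -1))
print("flagged among the known click-farm rows:", np.sum(flags[-20:] == -1))
```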
Reliability and safety will take center stage.
It’s been heartening to see researchers and practitioners become seriously interested and engaged in issues pertaining to privacy, fairness, and ethics. But as AI systems are deployed in mission-critical applications—and even in life-and-death scenarios such as autonomous vehicles or healthcare—improved efficiency from automation will need to come with safety and reliability measurements and guarantees. The rise of machine deception in online platforms, as well as recent accidents involving autonomous vehicles, has cracked this issue wide open. In 2019, expect to hear safety discussed more intensively.
Democratizing access to large training data will level the playing field.
Because many of the models we rely on—including deep learning and reinforcement learning—are data hungry, the anticipated winners in the field of AI have been huge companies or countries with access to massive amounts of data. But services for generating labeled datasets (specifically companies that rely on human labelers) are beginning to use machine learning tools to help their human workers scale and improve their accuracy. And in certain domains, new tools like generative adversarial networks (GANs) and simulation platforms are able to provide realistic synthetic data, which can be used to train machine learning models. Finally, a new crop of secure and privacy-preserving technologies that facilitate sharing of data across organizations is helping companies take advantage of data they didn’t generate. Together, these developments will help smaller organizations compete using machine learning and AI.
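As an illustration of the synthetic-data approach, here is a minimal GAN sketch in PyTorch that learns to imitate a toy two-dimensional distribution; the architecture, training schedule, and data are simplified stand-ins, and real tabular or image data would require a much more careful setup.

```python
import math
import torch
import torch.nn as nn

def real_batch(n):
    """Toy 'real' data: 2-D points on a noisy ring (a stand-in for real records)."""
    theta = torch.rand(n) * 2 * math.pi
    r = 1.0 + 0.05 * torch.randn(n)
    return torch.stack([r * torch.cos(theta), r * torch.sin(theta)], dim=1)

# Generator maps noise to fake samples; discriminator scores real vs. fake
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # Discriminator update: label real points 1, generated points 0
    x_real, z = real_batch(64), torch.randn(64, 8)
    x_fake = G(z).detach()
    d_loss = bce(D(x_real), torch.ones(64, 1)) + bce(D(x_fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: try to make generated points look real to the discriminator
    z = torch.randn(64, 8)
    g_loss = bce(D(G(z)), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# Synthetic samples that mimic the ring; in practice these could augment scarce training data
synthetic = G(torch.randn(1000, 8)).detach()
print("synthetic sample mean radius:", synthetic.norm(dim=1).mean().item())
```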