Predictions for computing in 2017
Last year was immense for advances in computing and machine learning. However, 2017 may deliver even more. Here are five key things to look forward to.
Positive reinforcement
AlphaGo’s historic victory against one of the best Go players of all time, Lee Sedol, was a landmark for the field of AI, and especially for the technique known as deep reinforcement learning.
Reinforcement learning takes inspiration from the way animals learn that certain behaviors tend to result in a positive or negative outcome. Using this approach, a computer can, say, figure out how to navigate a maze by trial and error, and then associate the positive outcome (exiting the maze) with the actions that led up to it. This lets a machine learn without instruction or even explicit examples. The idea has been around for decades, but combining it with large (or deep) neural networks provides the power needed to make it work on really complex problems, like the game of Go. Through relentless experimentation, as well as analysis of previous games, AlphaGo figured out for itself how to play the game at an expert level.
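To make the trial-and-error idea concrete, here is a minimal sketch of tabular Q-learning, one of the simplest reinforcement-learning algorithms, on a toy "maze": a five-state corridor with an exit at one end. The states, reward values, and hyperparameters are all illustrative choices for this sketch, not anything taken from AlphaGo, which combines reinforcement learning with deep neural networks rather than a lookup table.

```python
import random

# A tiny one-dimensional "maze": states 0..4, the exit is state 4.
# Actions: 0 = move left, 1 = move right.
N_STATES = 5
GOAL = 4
ACTIONS = [0, 1]

def step(state, action):
    """Apply an action; the only reward comes from reaching the exit."""
    nxt = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # value of each action in each state
    for _ in range(episodes):
        state = 0
        done = False
        while not done:
            # Explore occasionally; otherwise take the best-known action.
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[state][a])
            nxt, reward, done = step(state, action)
            # Q-learning update: propagate the eventual outcome back
            # onto the action that led toward it.
            q[state][action] += alpha * (
                reward + gamma * max(q[nxt]) - q[state][action]
            )
            state = nxt
    return q

q = train()
# After training, the greedy policy moves right from every state.
policy = [max(ACTIONS, key=lambda a: q[s][a]) for s in range(GOAL)]
print(policy)  # → [1, 1, 1, 1]
```

Nothing tells the agent that "right" is correct; it discovers this purely by associating the exit reward with the actions that preceded it, which is the core mechanism the paragraph above describes.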
The hope is that reinforcement learning will now prove useful in many real-world situations. And the recent release of several simulated environments should spur progress on the necessary algorithms by increasing the variety of skills computers can acquire this way.
In 2017, we are likely to see attempts to apply reinforcement learning to problems such as automated driving and industrial robotics. Google has already boasted of using deep reinforcement learning to make its data centers more efficient. But the approach remains experimental, and it still requires time-consuming simulation, so it’ll be interesting to see how effectively it can be deployed.
Dueling neural networks
At the banner AI academic gathering held recently in Barcelona, the Neural Information Processing Systems conference, much of the buzz was about a new machine-learning technique known as generative adversarial networks.
Invented by Ian Goodfellow, now a research scientist at OpenAI, generative adversarial networks, or GANs, are systems consisting of one network that generates new data after learning from a training set, and another that tries to discriminate between real and fake data. By working together, these networks can produce very realistic synthetic data. The approach could be used to generate video-game scenery, de-blur pixelated video footage, or apply stylistic changes to computer-generated designs.
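The adversarial setup can be sketched in a few dozen lines. Everything below is a toy illustration, not Goodfellow’s implementation: the "generator" just learns a scalar offset for Gaussian noise, the "discriminator" is a two-parameter logistic classifier, and the learning rates and step count are arbitrary choices. The point is only to show the two models pulling against each other until the fakes resemble the real data.

```python
import math
import random

rng = random.Random(0)

def sample_real():
    # "Real" data: samples from a Gaussian centred at 4.
    return rng.gauss(4.0, 0.5)

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

# Generator: shifts noise by a learned offset theta (same variance as the
# real data, so a perfect fake is possible once theta reaches 4).
theta = 0.0

# Discriminator: logistic classifier D(x) = sigmoid(w * x + b),
# trained to output 1 for real samples and 0 for fakes.
w, b = 0.1, 0.0

lr_d, lr_g = 0.1, 0.02
for _ in range(5000):
    # --- Discriminator step: push D(real) toward 1, D(fake) toward 0 ---
    x_real = sample_real()
    x_fake = theta + rng.gauss(0.0, 0.5)
    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)
    w += lr_d * ((1 - d_real) * x_real - d_fake * x_fake)
    b += lr_d * ((1 - d_real) - d_fake)

    # --- Generator step: adjust theta to make D call the fakes real ---
    x_fake = theta + rng.gauss(0.0, 0.5)
    d_fake = sigmoid(w * x_fake + b)
    theta += lr_g * (1 - d_fake) * w  # gradient of log D(fake) w.r.t. theta

print(round(theta, 2))  # typically settles near the real mean of 4
```

As the generator’s output drifts toward the real distribution, the discriminator loses its ability to tell the two apart, and the pressure on both sides fades; that tug-of-war is what lets GANs produce realistic synthetic data without labels.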
Yoshua Bengio, one of the world’s leading experts on machine learning (and Goodfellow’s PhD advisor at the University of Montreal), said at NIPS that the approach is especially exciting because it offers a powerful way for computers to learn from unlabeled data—something many believe may hold the key to making computers a lot more intelligent in years to come.
China’s AI boom
This may also be the year in which China starts looking like a major player in the field of AI. The country’s tech industry is shifting away from copying Western companies, and it has identified AI and machine learning as the next big areas of innovation.
China’s leading search company, Baidu, has had an AI-focused lab for some time, and it is reaping the rewards in terms of improvements in technologies such as voice recognition and natural language processing, as well as a better-optimized advertising business. Other players are now scrambling to catch up. Tencent, which offers the hugely successful mobile-first messaging and networking app WeChat, opened an AI lab last year, and the company was busy recruiting talent at NIPS. Didi, the ride-sharing giant that bought Uber’s Chinese operations earlier this year, is also building out a lab and reportedly working on its own driverless cars.
Chinese investors are now pouring money into AI-focused startups, and the Chinese government has signaled a desire to see the country’s AI industry blossom, pledging to invest about $15 billion by 2018.
Language learning
Ask AI researchers what their next big target is, and they are likely to mention language. The hope is that techniques that have produced spectacular progress in voice and image recognition, among other areas, may also help computers parse and generate language more effectively.
This is a long-standing goal in computing, and the prospect of computers communicating and interacting with us using language is a fascinating one. Better language understanding would make machines a whole lot more useful. But the challenge is a formidable one, given the complexity, subtlety, and power of language.
Don’t expect to get into deep and meaningful conversation with your smartphone for a while yet. But some impressive inroads are being made, and you can expect more advances in this area in 2017.
Backlash to the hype
As well as real advances and exciting new applications, 2016 saw the hype surrounding artificial intelligence reach dizzying new heights. While many have faith in the underlying value of the technologies being developed today, it’s hard to escape the feeling that the publicity surrounding AI is getting a little out of hand.
Some AI researchers are evidently irritated. A launch party was organized during NIPS for a fake AI startup called Rocket AI, to highlight the growing mania and nonsense around real AI research. The deception wasn’t terribly convincing, but it was a fun way to draw attention to a real problem.
The real problem is that hype inevitably leads to a sense of disappointment when big breakthroughs don’t happen, causing overvalued startups to fail and investment to dry up. Perhaps 2017 will feature some kind of backlash against the AI hype machine, and maybe that wouldn’t be such a bad thing.