
Five Things To Watch In AI And Machine Learning In 2017

 

Without a doubt, 2016 was an amazing year for Machine Learning (ML) and Artificial Intelligence (AI). During the year, we saw nearly every high-tech CEO claim the mantle of becoming an “AI company.” However, only a few companies were actually able to monetize their significant investments in AI, notably Amazon, Baidu, Facebook, Google, IBM, Microsoft, Tesla Motors and NVIDIA. But 2016 was nonetheless a year of many firsts. As a poster child for the potential of ML, Google DeepMind mastered the subtle and enormously complex game of Go, soundly beating the reigning world champion. And more than a few cool products were introduced that incorporated Machine Learning, from the first autonomous vehicles to new “intelligent” household assistants such as Google Home and Amazon Echo. But will 2017 finally usher in the long-promised age of Artificial Intelligence?

NVIDIA’s Saturn V supercomputer for Machine Learning is the 28th fastest computer in the world, and is #1 on the Green 500 list of the most power efficient. (Source: NVIDIA)

Two domains: AI and Machine Learning. These terms are not interchangeable. Machine Learning, a fundamentally different way to program a computer by training it with a massive ocean of sample data, is real and is here to stay. General Artificial Intelligence remains a distant goal, perhaps 5 to 20 years away depending on the specific domain of “intelligence” being learned. To be sure, computers trained using Machine Learning hold tremendous promise, as well as the potential for massive disruption in the workplace. But these systems remain a far cry from genuine intelligence. Just ask Apple’s Siri, and you will see what I mean. The hype around AI, and confusion over what the term actually means, will inevitably lead to some disillusionment as the limitations of the technology become apparent.

With that context in mind, here’s what I expect for the coming year for Machine Learning and AI.

1. Hardware accelerators for Machine Learning will proliferate.

Today, nearly all training of deep neural networks (DNNs) is performed using NVIDIA GPUs. Conversely, DNN inference, the actual use of a trained network, can be done efficiently on CPUs, GPUs, FPGAs, or even specialized ASICs such as the Google TPU, depending on the type of data being analyzed. Both the training and inference markets will be hotly contested in 2017, as Advanced Micro Devices, Intel (with its newly acquired Nervana chips), NVIDIA, Xilinx and several startups all launch accelerators specifically targeting this lucrative market. If you would like a deeper dive into the various semiconductor alternatives for AI, please see my companion article on this subject.
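To make the training-versus-inference split concrete, here is a minimal sketch of the two phases in code. It assumes PyTorch and a toy model (neither appears in the article); the point is simply that the heavy, gradient-based work runs on a GPU accelerator, while the trained network can then be frozen and served from a CPU or other inference device.

```python
import torch
import torch.nn as nn

# Training phase: use a GPU accelerator if one is available.
train_device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10)).to(train_device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# One toy gradient step on random data (a stand-in for a real dataset).
x = torch.randn(32, 64, device=train_device)
y = torch.randint(0, 10, (32,), device=train_device)
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()

# Inference phase: move the trained weights to the CPU and disable gradient
# tracking; this much lighter workload is what CPUs, FPGAs, or ASICs such as
# the TPU can serve efficiently.
model = model.to("cpu").eval()
with torch.no_grad():
    prediction = model(torch.randn(1, 64)).argmax(dim=1)
print(prediction)
```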

2. Select application domains will leverage Machine Learning to improve efficiency of mission-critical processes.

If you are trying to find the killer AI app, the increasingly pervasive nature of the technology will make it difficult to single one out. However, Machine Learning has begun to deliver spectacular results in very specific niches where its pattern-recognition capabilities can be exploited, and this trend will continue to expand into new markets in 2017.


Kurzweil | LATEST SCIENCE & TECHNOLOGY NEWS

Monday | April 24, 2017
DAILY EDITION

The first 2D microprocessor — based on a layer of just 3 atoms

April 24, 2017
[Figure: Overview of the entire chip. AC = Accumulator, internal buffer; PC = Program Counter, points at the next instruction to be executed; IR = Instruction Register, used to buffer data and instruction bits received from the external memory; CU = Control Unit, orchestrates the other units according to the instruction to be executed; OR = Output Register, memory used to buffer output data; ALU = Arithmetic Logic Unit, does the actual calculations. Credit: TU Wien]

May one day replace traditional microprocessor chips as well as open up new applications in flexible electronics

Researchers at Vienna University of Technology (known as TU Wien) in Vienna, Austria, have developed the world’s first two-dimensional microprocessor — the most complex 2D circuitry so far. Microprocessors based on atomically thin 2D materials promise to one day replace traditional microprocessors as well as open up new applications in flexible electronics. Consisting of 115 … more…

‘Negative mass’ created at Washington State University

April 21, 2017
[Figure: Experimental images of an expanding spin-orbit superfluid Bose-Einstein condensate at different expansion times. Credit: M. A. Khamehchi et al./Physical Review Letters]

Washington State University (WSU) physicists have created a fluid with “negative mass,” which means that if you push it, it accelerates toward you instead of away, in apparent violation of Newton’s laws. The phenomenon can be used to explore some of the more challenging concepts of the cosmos, said Michael Forbes, PhD, a WSU assistant … more…

NEW EVENTS

Who can save Humanity from Superintelligence?

Dates: Apr 29, 2017
Location: London, UK
more…

The AI Summit San Francisco

Dates: Sep 27 – 28, 2017
Location: San Francisco, California
more…

Visit KurzweilAI.net

Blockchain

NEWS
6 Blockchain-based Digital ID Management Platforms to Keep an Eye On

Blockchain technology can be used for many different purposes, including powering networks such as bitcoin. Digital identity management is an area …
Why Blockchain May Be Your Next Supply Chain

Blockchain technology may be shaking up a supply chain near you. It’s smarter, it’s faster, and it gets more participants on board. In a recent piece at …
European Commission Backs Blockchain Pilot With €500k Budget

The European Union’s executive branch is establishing an “observatory” focused on blockchain as part of a wider pilot project. Unveiled earlier this …
Mike O’Donnell: Volkswagen, Blockchain, and the fear of disruption

Sadly, the Blockchain philosophy has had the opposite experience, with its first real market manifestation being a cryptocurrency called Bitcoin.
Documentary Presents Accessible Intro to Impact of Blockchain Tech

Delving into how the blockchain ecosystem can shape our future is Swiss-born economist and filmmaker Manuel Stagars, who filmed the recently …
Industries Gauge Blockchain’s Promise to Unlock New Economic Value

A few weeks ago I attended the IBM Blockchain Summit 2017 in New York. The Summit included presentations of concrete blockchain use cases in a …
Delaware Law Amendments Would Facilitate Blockchain Maintenance of Corporate Records

In 2015, the then-Governor Jack Markell announced an initiative to embrace blockchain technology, which eventually saw the DSBA Corporation Law …
Why Europe and China are Ahead of US in Blockchain Development

The US is falling behind China and Europe in blockchain development primarily due to the lack of regulatory frameworks. With a practical approach …
Outlier Ventures and Imperial College’s Blockchain Lab IC3RE Sign 3 Year …

The Imperial College Centre for Cryptocurrency Research & Engineering (IC3RE) at Imperial College London has joined forces with blockchain …
First European Blockchain for Healthcare conference #bc4hc on 31 May 2017 in Berlin

The first European Blockchain for Healthcare conference is targeted at C-level executives from the healthcare and health insurance industries as well …
BLOGS
Blockchain Will Change the World. We Think.

Attendees at an MIT conference have big hopes for blockchain. But multiple technical and regulatory issues must be solved first.
WEB
Dot Blockchain Music

An introduction to the overall technical architecture for Phase 2 of Dot Blockchain Music. Presented by Chris Tse, Tech Lead of the dotBC Open …
From concept to production: Getting real with blockchain at Consensus

Enterprise leaders are reimagining their business processes with blockchain. Join Consensus 2017 to be part of the transformation in your industry.
Tradable and scarce digital assets on the blockcha…

Could the blockchain be used to maintain a chain of custody on digital documents? If there were laws and regulations supporting this, I think it could …
Why banking should stay well clear of blockchains

If the financial services industry is banking on blockchains as the basis for new service innovation, it will be sorely disappointed. Blockchains’ design …
SolarCoin, Blockchain and Utilities Industry Data Storage challenges

The Blockchain based cryptocurrency SolarCoin is set to revolutionize the electricity industry – and data storage.
Blockchain and the wisdom of crowds

His excellent presentation on the difference between a blockchain and database and the niches that the former ought to fill stole the show for me, and …
Senior QA Engineer – Blockchain!

Senior QA Engineer – Blockchain! in Belfast – Recruit NI, Northern Ireland’s Leading Recruitment Website.
Humaniq at Blockchain conference in Moscow!

Two days ago, on Wednesday, I attended the Blockchain and Bitcoin Conference in Moscow with the Humaniq team.

The future of AI is neuromorphic. Meet the scientists building digital ‘brains’ for your phone

Neuromorphic chips are being designed to specifically mimic the human brain – and they could soon replace CPUs



AI services like Apple’s Siri and others operate by sending your queries to faraway data centers, which send back responses. The reason they rely on cloud-based computing is that today’s electronics don’t come with enough computing power to run the processing-heavy algorithms needed for machine learning. The typical CPUs most smartphones use could never handle a system like Siri on the device. But Dr. Chris Eliasmith, a theoretical neuroscientist and co-CEO of Canadian AI startup Applied Brain Research, is confident that a new type of chip is about to change that.

“Many have suggested Moore’s law is ending and that means we won’t get ‘more compute’ cheaper using the same methods,” Eliasmith says. He’s betting on the proliferation of ‘neuromorphics’ — a type of computer chip that is not yet widely known but is already being developed by several major chip makers.

Traditional CPUs process instructions based on “clocked time” – information is transmitted at regular intervals, as if managed by a metronome. By packing in digital equivalents of neurons, neuromorphics communicate in parallel (and without the rigidity of clocked time) using “spikes” – bursts of electric current that can be sent whenever needed. Just like our own brains, the chip’s neurons communicate by processing incoming flows of electricity – each neuron able to determine from the incoming spike whether to send current out to the next neuron.
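To make the contrast concrete, here is a toy leaky integrate-and-fire neuron in Python. It is an illustrative sketch only, not any vendor’s actual chip design: the unit integrates incoming current, and output events (“spikes”) occur only when a threshold is crossed, rather than on every tick of a clock.

```python
import numpy as np

def lif_neuron(input_current, dt=1e-3, tau=0.02, threshold=1.0):
    """Toy leaky integrate-and-fire neuron; returns spike times in seconds."""
    v = 0.0            # membrane potential
    spikes = []
    for step, drive in enumerate(input_current):
        v += dt * (drive - v) / tau   # leak toward zero, pushed by the input
        if v >= threshold:
            spikes.append(step * dt)  # emit a spike only when needed...
            v = 0.0                   # ...then reset
    return spikes

# A steady input produces a regular spike train; no input produces no events
# (and, on a neuromorphic chip, essentially no switching activity).
print(lif_neuron(np.full(1000, 1.5)))
print(lif_neuron(np.zeros(1000)))
```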

What makes this a big deal is that these chips require far less power to process AI algorithms. For example, one neuromorphic chip made by IBM contains five times as many transistors as a standard Intel processor, yet consumes only 70 milliwatts of power. An Intel processor would use anywhere from 35 to 140 watts, or up to 2000 times more power.

Eliasmith points out that neuromorphics aren’t new and that their designs have been around since the 80s. Back then, however, the designs required specific algorithms be baked directly into the chip. That meant you’d need one chip for detecting motion, and a different one for detecting sound. None of the chips acted as a general processor in the way that our own cortex does.


This was partly because there was no easy way for programmers to design algorithms that could do much with a general-purpose chip. So even as these brain-like chips were being developed, building algorithms for them remained a challenge.

Eliasmith and his team are keenly focused on building tools that would allow a community of programmers to deploy AI algorithms on these new cortical chips.

Central to these efforts is Nengo, a compiler that developers can use to build their own algorithms for AI applications that will operate on general-purpose neuromorphic hardware. A compiler is a software tool that programmers use to translate the code they write into the low-level instructions that get hardware to actually do something. What makes Nengo useful is its use of the familiar Python programming language, known for its intuitive syntax, and its ability to target many different hardware platforms, including neuromorphic chips. Pretty soon, anyone with an understanding of Python could be building sophisticated neural nets made for neuromorphic hardware.
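For a sense of what that looks like in practice, here is a minimal Nengo model written against the open-source nengo Python package (the specific API details are my assumption and may differ across versions and hardware backends): a population of simulated spiking neurons is wired up to approximate the function x² on a streaming input.

```python
import numpy as np
import nengo

with nengo.Network(label="square") as model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))      # time-varying input
    pre = nengo.Ensemble(n_neurons=100, dimensions=1)        # spiking population
    post = nengo.Ensemble(n_neurons=100, dimensions=1)
    nengo.Connection(stim, pre)
    nengo.Connection(pre, post, function=lambda x: x ** 2)   # decode x^2 from the spikes
    probe = nengo.Probe(post, synapse=0.01)                  # filtered readout

# Swapping in a neuromorphic backend would change only the simulator line.
with nengo.Simulator(model) as sim:
    sim.run(1.0)

print(sim.data[probe][-5:])  # decoded estimate of sin(2*pi*t)**2 near t = 1 s
```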

“Things like vision systems, speech systems, motion control, and adaptive robotic controllers have already been built with Nengo,” Peter Suma, a trained computer scientist and the other CEO of Applied Brain Research, tells me.

Perhaps the most impressive system built using the compiler is Spaun, a project that in 2012 earned international praise for being the most complex brain model ever simulated on a computer. Spaun demonstrated that computers could be made to interact fluidly with the environment and perform human-like cognitive tasks like recognizing images and controlling a robot arm that writes down what it sees. The machine wasn’t perfect, but it was a stunning demonstration that computers could one day blur the line between human and machine cognition. Recently, by using neuromorphics, most of Spaun has been run 9,000x faster, using less energy than it would on conventional CPUs, and by the end of 2017, all of Spaun will be running on neuromorphic hardware.

Eliasmith won NSERC’s John C. Polanyi Award (Canada’s highest recognition for a breakthrough scientific achievement) for that project, and once Suma came across the research, the pair joined forces to commercialize these tools.

“While Spaun shows us a way towards one day building fluidly intelligent reasoning systems, in the nearer term neuromorphics will enable many types of context aware AIs,” says Suma. Suma points out that while today’s AIs like Siri remain offline until explicitly called into action, we’ll soon have artificial agents that are ‘always on’ and ever-present in our lives.

“Imagine a Siri that listens to and sees all of your conversations and interactions. You’ll be able to ask it things like, ‘Who did I have that conversation with about doing the launch for our new product in Tokyo?’ or ‘What was that idea for my wife’s birthday gift that Melissa suggested?’” he says.

When I raised concerns that some company might then have an uninterrupted window into even the most intimate parts of my life, I was reminded that because the AI would be processed locally on the device, there’s no need for that information to touch a server owned by a big company. And for Eliasmith, this ‘always on’ component is a necessary step towards true machine cognition. “The most fundamental difference between most available AI systems of today and the biological intelligent systems we are used to, is the fact that the latter always operate in real-time. Bodies and brains are built to work with the physics of the world,” he says.

Already, major players across the IT industry are racing to get their AI services into the hands of users. Companies like Apple, Facebook, Amazon, and even Samsung are developing conversational assistants they hope will one day become digital helpers.

With the rise of neuromorphics, and of tools like Nengo, we could soon have AIs capable of exhibiting a stunning level of natural intelligence right on our phones.

Book review | THINKING MACHINES and HEART OF THE MACHINE

[Illustration credit: Eleni Kalorkoti]

THINKING MACHINES
The Quest for Artificial Intelligence — and Where It’s Taking Us Next
By Luke Dormehl
275 pp. TarcherPerigee. Paper, $16.

HEART OF THE MACHINE
Our Future in a World of Artificial Emotional Intelligence
By Richard Yonck
312 pp. Arcade Publishing. $25.99.

Books about science and especially computer science often suffer from one of two failure modes. Treatises by scientists sometimes fail to clearly communicate insights. Conversely, the work of journalists and other professional writers may exhibit a weak understanding of the science in the first place.

Luke Dormehl is the rare lay person — a journalist and filmmaker — who actually understands the science (and even the math) and is able to parse it in an edifying and exciting way. He is also a gifted storyteller who interweaves the personal stories with the broad history of artificial intelligence. I found myself turning the pages of “Thinking Machines” to find out what happens, even though I was there for much of it, and often in the very room.


Dormehl starts with the 1964 World’s Fair — held only miles from where I lived as a high school student in Queens — evoking the anticipation of a nation working on sending a man to the moon. He identifies the early examples of artificial intelligence that captured my own excitement at the time, like IBM’s demonstrations of automated handwriting recognition and language translation. He writes as if he had been there.


Dormehl describes the early bifurcation of the field into the Symbolic and Connectionist schools, and he captures key points that many historians miss, such as the uncanny confidence of Frank Rosenblatt, the Cornell professor who pioneered the first popular neural network (he called them “perceptrons”). I visited Rosenblatt in 1962 when I was 14, and he was indeed making fantastic claims for this technology, saying it would eventually perform a very wide range of tasks at human levels, including speech recognition, translation and even language comprehension. As Dormehl recounts, these claims were ridiculed at the time, and indeed the machine Rosenblatt showed me in 1962 couldn’t perform any of these things. In 1969, funding for the neural net field was obliterated for about two decades when Marvin Minsky and his M.I.T. colleague Seymour Papert published the book “Perceptrons,” which proved a theorem that perceptrons could not distinguish a connected figure (in which all parts are connected to each other) from a disconnected figure, something a human can do easily.

What Rosenblatt told me in 1962 was that the key to the perceptron achieving human levels of intelligence in many areas of learning was to stack the perceptrons in layers, with the output of one layer forming the input to the next. As it turns out, the Minsky-Papert perceptron theorem applies only to single-layer perceptrons. As Dormehl recounts, Rosenblatt died in 1971 without having had the chance to respond to Minsky and Papert’s book. It would be decades before multi-layer neural nets proved Rosenblatt’s prescience. Minsky was my mentor for 54 years until his death a year ago, and in recent years he lamented the “success” of his book and had become respectful of the recent gains in neural net technology. As Rosenblatt had predicted, neural nets were indeed providing near human-level (and in some cases superhuman levels) of performance on a wide range of intelligent tasks, from translating languages to driving cars to playing Go.
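The technical nub is easy to see in miniature. The connectedness predicate itself is awkward to demonstrate compactly, so the sketch below uses the textbook stand-in, XOR (my substitution, not the book’s example): no single threshold unit can compute it, because the two classes are not linearly separable, but stacking a second layer of the same units solves it immediately.

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

def perceptron(x, weights, bias):
    """A single Rosenblatt-style threshold unit."""
    return (x @ weights + bias > 0).astype(int)

# No single unit can output XOR = [0, 1, 1, 0]: the positive and negative
# cases cannot be separated by one line. Two stacked layers can:
hidden = np.stack([
    perceptron(X, np.array([1, 1]), -0.5),   # computes OR(x1, x2)
    perceptron(X, np.array([1, 1]), -1.5),   # computes AND(x1, x2)
], axis=1)
xor = perceptron(hidden, np.array([1, -1]), -0.5)  # OR and not AND

print(xor)  # [0 1 1 0]
```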

Dormehl examines the pending social and economic impact of artificial intelligence, for example on employment. He recounts the positive history of automation. In 1900, about 40 percent of American workers were employed on farms and over 20 percent in factories. By 2015, these figures had fallen to 2 percent on farms and 8.7 percent in factories. Yet for every job that was eliminated, we invented several new ones, with the work force growing from 24 million people (31 percent of the population in 1900) to 142 million (44 percent of the population in 2015). The average job today pays 11 times as much per hour in constant dollars as it did a century ago. Many economists are saying that while this may all be true, the future will be different because of the unprecedented acceleration of progress. Although expressing some cautions, Dormehl shares my optimism that we will be able to deploy artificial intelligence in the role of brain extenders to keep ahead of this economic curve. As he writes, “Barring some catastrophic risk, A.I. will represent an overall net positive for humanity when it comes to employment.”

Many observers of A.I. and the other 21st-century exponential technologies like biotechnology and nanotechnology attempt to peer into the continuing accelerating gains and fall off the horse. Dormehl ends his book still in the saddle, discussing the prospect of conscious A.I.s that will demand and/or deserve rights, and the possibility of “uploading” our brains to the cloud. I recommend this book to anyone with a lay scientific background who wants to understand what I would argue is today’s most important revolution, where it came from, how it works and what is on the horizon.


“Heart of the Machine,” the futurist Richard Yonck’s new book, contains its important insight in the title. People often think of feelings as secondary or as a sideshow to intellect, as if the essence of human intelligence is the ability to think logically. If that were true, then machines are already ahead of us. The superiority of human thinking lies in our ability to express a loving sentiment, to create and appreciate music, to get a joke. These are all examples of emotional intelligence, and emotion is at both the bottom and top of our thinking. We still have that old reptilian brain that provides our basic motivations for meeting our physical needs and to which we can trace feelings like anger and jealousy. The neocortex, a layer covering the brain, emerged in mammals two hundred million years ago and is organized as a hierarchy of modules. Two million years ago, we got these big foreheads that house the frontal cortex and enabled us to process language and music.

Yonck provides a compelling and thorough history of the interaction between our emotional lives and our technology. He starts with the ability of early hominids to fashion stone tools, perhaps the earliest example of technology. Remarkably, the complex skills required were passed down from one generation to the next for over three million years, despite the fact that for most of this period language had not yet been invented. Yonck makes a strong case that it was our early ability to communicate through pre-language emotional expressions that enabled the remarkable survival of this skill, and enabled technology to take root.

Yonck describes today’s emerging technologies for understanding our emotions using images of facial expressions, intonation patterns, respiration, galvanic skin response and other signals, and how these instruments might be adopted by the military and by interactive augmented reality experiences. And he recounts how all communication technologies, from the first books to today’s virtual reality, have had significant sexual applications and will enhance sensual experiences in the future.

Yonck is a sure-footed guide and is not without a sense of humor. He imagines, for example, a scenario a few decades from now with a spirited exchange at the dinner table. “No daughter of mine is marrying a robot and that’s final!” a father exclaims.

His daughter angrily replies: “Michael is a cybernetic person with the same rights you and I have! We’re getting married and there’s nothing you can do to change that!” She storms out of the room.

Yonck concludes that we will merge with our technology — a position I agree with — and that we have been doing so for a long time. He argues, as have I, that merging with future superintelligent A.I.s is our best strategy for ensuring a beneficial outcome. Achieving this requires creating technology that can understand and master human emotion. To those who would argue that such a quest is arrogantly playing God, he says simply: “This is what we do.”


LATEST SCIENCE & TECHNOLOGY NEWS

This advance could finally make graphene-based semiconductor chips feasible

March 31, 2017
[Figure: Atomic force microscopy image of as-deposited (left) and laser-annealed (right) rGO (bottom) thin films. The entire "pulsed laser annealing" process is done at room temperature and atmospheric pressure using high-power nanosecond laser pulses to "melt" the rGO material; it is completed in about 200 nanoseconds. Credit: Anagh Bhaumik and Jagdish Narayan/Journal of Applied Physics]

Researchers at North Carolina State University (NC State) have developed a layered material that can be used to develop transistors based on graphene — a long-sought goal in the electronics industry. Graphene has attractive properties, such as extremely high conductivity, meaning it conducts the flow of electrical current really well (compared to copper, for example), …more…

Scientists grow beating heart tissue on spinach leaves

March 31, 2017
[Image credit: Worcester Polytechnic Institute]

How crossing plant and animal kingdoms may lead to radical new tissue-engineering breakthroughs

A research team headed by Worcester Polytechnic Institute (WPI) scientists* has solved a major tissue engineering problem holding back the regeneration of damaged human tissues and organs: how to grow small, delicate blood vessels, which are beyond the capabilities of 3D printing.** The researchers used plant leaves as scaffolds (structures) in an attempt to create the … more…

NEW BLOG POSTS

Vanity Fair | Elon Musk’s billion dollar crusade to stop the AI apocalypse

March 31, 2017
Ray Kurzweil interview on artificial intelligence futures

publication: Vanity Fair | column: Hive | story title: Elon Musk’s billion dollar crusade to stop the AI apocalypse | story author: Maureen Dowd | date: April 2017 issue

Story excerpts from interview with Ray Kurzweil: 1. | Spiraling capabilities of self-improving artificial intelligence. Google has gobbled up almost every robotics & machine learning company. It bought DeepMind for $650 … more…

The New York Times • Book Review | How we’ll end up merging with our technology

March 30, 2017
Ray Kurzweil reviews two popular books

Dear readers, Ray Kurzweil wrote a book review for The New York Times • Book Review section. His review is titled “How we’ll end up merging with our technology.” He reviews two new books on computing and the future of society. Please click to read the full review exploring these two popular books in science & …more…

Visit KurzweilAI.net