LATEST SCIENCE & TECHNOLOGY NEWS

Neural probes for the spinal cord

April 6, 2017
Researchers have developed a rubber-like fiber that can flex and stretch while simultaneously delivering both optical impulses, for optoelectronic stimulation, and electrical connections, for stimulation and monitoring. (credit: Chi (Alice) Lu and Seongjun Park)

Rubber-like fiber can flex and stretch and can be used for optoelectronic and electrical stimulation/monitoring

A research team led by MIT scientists has developed rubbery fibers for neural probes that can flex and stretch and be implanted into the mouse spinal cord. The goal is to study spinal cord neurons and ultimately develop treatments to alleviate spinal cord injuries in humans. That requires matching the stretchiness, softness, and flexibility of … more…

Astronomers detect atmosphere around Earth-like planet

April 6, 2017
Artist’s impression of atmosphere around super-Earth planet GJ 1132b (credit: MPIA)

Astronomers have detected an atmosphere around an Earth-like planet beyond our solar system for the first time: the super-Earth planet GJ 1132b in the southern constellation Vela, at a distance of 39 light-years from Earth. The team, led by Keele University’s John Southworth, PhD, used the 2.2 m ESO/MPG telescope in Chile to take images … more…

NEW EVENTS

Starship Congress 2017

Dates: Aug 7 – 9, 2017
Location: Monterey, California
more…


The future of AI is neuromorphic. Meet the scientists building digital ‘brains’ for your phone

Neuromorphic chips are being designed to specifically mimic the human brain – and they could soon replace CPUs


AI services like Apple’s Siri operate by sending your queries to faraway data centers, which send back responses. They rely on cloud-based computing because today’s electronics don’t come with enough computing power to run the processing-heavy algorithms needed for machine learning. The typical CPUs most smartphones use could never handle a system like Siri on the device. But Dr. Chris Eliasmith, a theoretical neuroscientist and co-CEO of Canadian AI startup Applied Brain Research, is confident that a new type of chip is about to change that.

“Many have suggested Moore’s law is ending and that means we won’t get ‘more compute’ cheaper using the same methods,” Eliasmith says. He’s betting on the proliferation of ‘neuromorphics’ — a type of computer chip that is not yet widely known but already being developed by several major chip makers.

Traditional CPUs process instructions based on “clocked time” – information is transmitted at regular intervals, as if managed by a metronome. By packing in digital equivalents of neurons, neuromorphics communicate in parallel (and without the rigidity of clocked time) using “spikes” – bursts of electric current that can be sent whenever needed. Just like our own brains, the chip’s neurons communicate by processing incoming flows of electricity – each neuron able to determine from the incoming spike whether to send current out to the next neuron.
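
To make the contrast concrete, here is a minimal sketch of a leaky integrate-and-fire neuron, the standard textbook model of spiking computation. This example is illustrative rather than taken from the article, and all parameter values are assumptions:

```python
# A minimal leaky integrate-and-fire neuron: the output is a set of
# spike times rather than a value produced on every clock tick.
# (Illustrative sketch; parameters are arbitrary assumptions.)
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau=0.02, v_thresh=1.0, v_reset=0.0):
    """Integrate input current; emit a spike whenever the membrane
    potential crosses v_thresh, then reset. Returns spike times in seconds."""
    v = 0.0
    spike_times = []
    for step, i_in in enumerate(input_current):
        v += dt * (i_in - v) / tau   # leaky integration toward the input
        if v >= v_thresh:            # threshold crossing -> spike
            spike_times.append(step * dt)
            v = v_reset              # reset after spiking
    return spike_times

# A constant drive yields a regular spike train; downstream neurons only
# ever see the spikes, not a clocked stream of values.
spikes = simulate_lif(np.full(1000, 1.5))
print(f"{len(spikes)} spikes in 1 s of simulated time")
```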

What makes this a big deal is that these chips require far less power to process AI algorithms. For example, one neuromorphic chip made by IBM contains five times as many transistors as a standard Intel processor, yet consumes only 70 milliwatts of power. An Intel processor would use anywhere from 35 to 140 watts, or up to 2000 times more power.
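
The arithmetic behind that comparison is straightforward:

```latex
\frac{35\ \text{W}}{70\ \text{mW}} = 500
\qquad
\frac{140\ \text{W}}{70\ \text{mW}} = 2000
```

so the claimed power savings range from 500x at the low end to 2000x at the high end.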

Eliasmith points out that neuromorphics aren’t new and that their designs have been around since the 80s. Back then, however, the designs required specific algorithms be baked directly into the chip. That meant you’d need one chip for detecting motion, and a different one for detecting sound. None of the chips acted as a general processor in the way that our own cortex does.

This was partly because there was no way for programmers to design algorithms that could do much with a general-purpose chip. So even as these brain-like chips were being developed, building algorithms for them remained a challenge.

Eliasmith and his team are keenly focused on building tools that would allow a community of programmers to deploy AI algorithms on these new cortical chips.

Central to these efforts is Nengo, a compiler that developers can use to build their own algorithms for AI applications that will operate on general-purpose neuromorphic hardware. A compiler is a software tool that programmers use to write code, and that translates that code into the complex instructions that get hardware to actually do something. What makes Nengo useful is its use of the familiar Python programming language – known for its intuitive syntax – and its ability to put the algorithms on many different hardware platforms, including neuromorphic chips. Pretty soon, anyone with an understanding of Python could be building sophisticated neural nets made for neuromorphic hardware.
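
For a sense of what that looks like in practice, here is a minimal Nengo model in Python. It is a generic "communication channel" network built with Nengo's publicly documented API, not one of Applied Brain Research's production systems:

```python
# A minimal Nengo model: spiking ensembles passing a signal along.
# (Sketch using Nengo's standard Python API; the network is illustrative.)
import numpy as np
import nengo

with nengo.Network(label="communication channel") as model:
    stim = nengo.Node(output=np.sin)                 # input: a sine wave
    a = nengo.Ensemble(n_neurons=100, dimensions=1)  # spiking neurons
    b = nengo.Ensemble(n_neurons=100, dimensions=1)
    nengo.Connection(stim, a)                        # stimulus -> a
    nengo.Connection(a, b)                           # a -> b
    probe = nengo.Probe(b, synapse=0.01)             # decoded output of b

with nengo.Simulator(model) as sim:                  # reference backend
    sim.run(1.0)                                     # simulate one second

# sim.data[probe] now holds b's decoded estimate of the sine wave; the
# same model description can be compiled to other backends, which is
# what makes the approach hardware-agnostic.
```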

“Things like vision systems, speech systems, motion control, and adaptive robotic controllers have already been built with Nengo,” Peter Suma, a trained computer scientist and the other co-CEO of Applied Brain Research, tells me.

Perhaps the most impressive system built using the compiler is Spaun, a project that in 2012 earned international praise for being the most complex brain model ever simulated on a computer. Spaun demonstrated that computers could be made to interact fluidly with the environment and perform human-like cognitive tasks like recognizing images and controlling a robot arm that writes down what it sees. The machine wasn’t perfect, but it was a stunning demonstration that computers could one day blur the line between human and machine cognition. Recently, by using neuromorphics, most of Spaun has been run 9000x faster, using less energy than it would on conventional CPUs – and by the end of 2017, all of Spaun will be running on neuromorphic hardware.

Eliasmith won NSERC’s John C. Polanyi Award for that project – Canada’s highest recognition for a breakthrough scientific achievement – and once Suma came across the research, the pair joined forces to commercialize these tools.

“While Spaun shows us a way towards one day building fluidly intelligent reasoning systems, in the nearer term neuromorphics will enable many types of context-aware AIs,” says Suma, who points out that while today’s AIs like Siri remain dormant until explicitly called into action, we’ll soon have artificial agents that are ‘always on’ and ever-present in our lives.

“Imagine a Siri that listens to and sees all of your conversations and interactions. You’ll be able to ask it things like ‘Who did I have that conversation with about doing the launch for our new product in Tokyo?’ or ‘What was that idea for my wife’s birthday gift that Melissa suggested?’” he says.

When I raised concerns that some company might then have an uninterrupted window into even the most intimate parts of my life, I was reminded that because the AI would run locally on the device, there’s no need for that information to touch a server owned by a big company. And for Eliasmith, this ‘always on’ component is a necessary step towards true machine cognition. “The most fundamental difference between most available AI systems of today and the biological intelligent systems we are used to is the fact that the latter always operate in real-time. Bodies and brains are built to work with the physics of the world,” he says.

Already, efforts across the IT industry are heating up to get AI services into the hands of users. Companies like Apple, Facebook, Amazon, and even Samsung are developing conversational assistants they hope will one day become digital helpers.

With the rise of neuromorphics, and tools like Nengo, we could soon have AIs capable of exhibiting a stunning level of natural intelligence – right on our phones.



Thinking Machines and Heart of the Machine — THINKING MACHINES

Credit: Eleni Kalorkoti

THINKING MACHINES
The Quest for Artificial Intelligence — and Where It’s Taking Us Next
By Luke Dormehl
275 pp. TarcherPerigee. Paper, $16.

HEART OF THE MACHINE
Our Future in a World of Artificial Emotional Intelligence
By Richard Yonck
312 pp. Arcade Publishing. $25.99.

Books about science and especially computer science often suffer from one of two failure modes. Treatises by scientists sometimes fail to clearly communicate insights. Conversely, the work of journalists and other professional writers may exhibit a weak understanding of the science in the first place.

Luke Dormehl is the rare lay person — a journalist and filmmaker — who actually understands the science (and even the math) and is able to parse it in an edifying and exciting way. He is also a gifted storyteller who interweaves personal stories with the broad history of artificial intelligence. I found myself turning the pages of “Thinking Machines” to find out what happens, even though I was there for much of it, and often in the very room.

Dormehl starts with the 1964 World’s Fair — held only miles from where I lived as a high school student in Queens — evoking the anticipation of a nation working on sending a man to the moon. He identifies the early examples of artificial intelligence that captured my own excitement at the time, like IBM’s demonstrations of automated handwriting recognition and language translation. He writes as if he had been there.

Dormehl describes the early bifurcation of the field into the Symbolic and Connectionist schools, and he captures key points that many historians miss, such as the uncanny confidence of Frank Rosenblatt, the Cornell professor who pioneered the first popular neural network (he called them “perceptrons”). I visited Rosenblatt in 1962 when I was 14, and he was indeed making fantastic claims for this technology, saying it would eventually perform a very wide range of tasks at human levels, including speech recognition, translation and even language comprehension. As Dormehl recounts, these claims were ridiculed at the time, and indeed the machine Rosenblatt showed me in 1962 couldn’t perform any of these things. In 1969, funding for the neural net field was obliterated for about two decades when Marvin Minsky and his M.I.T. colleague Seymour Papert published the book “Perceptrons,” which proved a theorem that perceptrons could not distinguish a connected figure (in which all parts are connected to each other) from a disconnected figure, something a human can do easily.

What Rosenblatt told me in 1962 was that the key to the perceptron achieving human levels of intelligence in many areas of learning was to stack the perceptrons in layers, with the output of one layer forming the input to the next. As it turns out, the Minsky-Papert perceptron theorem applies only to single-layer perceptrons. As Dormehl recounts, Rosenblatt died in 1971 without having had the chance to respond to Minsky and Papert’s book. It would be decades before multi-layer neural nets proved Rosenblatt’s prescience. Minsky was my mentor for 54 years until his death a year ago, and in recent years he lamented the “success” of his book and had become respectful of the recent gains in neural net technology. As Rosenblatt had predicted, neural nets were indeed providing near-human (and in some cases superhuman) levels of performance on a wide range of intelligent tasks, from translating languages to driving cars to playing Go.
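
To make the layering point concrete: a classic function that no single-layer perceptron can compute is XOR, while two stacked layers handle it easily. The sketch below is my illustration, not the book's, with hand-chosen weights:

```python
# Two stacked perceptron layers computing XOR, which a single-layer
# perceptron provably cannot do. (Illustrative weights, chosen by hand.)
import numpy as np

def step(x):
    """Classic perceptron threshold activation."""
    return (x > 0).astype(int)

def xor_two_layer(x1, x2):
    x = np.array([x1, x2])
    # Hidden layer: two perceptrons computing OR and AND of the inputs
    hidden = step(np.array([[1, 1], [1, 1]]) @ x - np.array([0.5, 1.5]))
    # Output layer: OR minus AND thresholded gives XOR
    return step(np.array([1, -1]) @ hidden - 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_two_layer(a, b))  # prints the XOR truth table
```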

Dormehl examines the pending social and economic impact of artificial intelligence, for example on employment. He recounts the positive history of automation. In 1900, about 40 percent of American workers were employed on farms and over 20 percent in factories. By 2015, these figures had fallen to 2 percent on farms and 8.7 percent in factories. Yet for every job that was eliminated, we invented several new ones, with the work force growing from 24 million people (31 percent of the population in 1900) to 142 million (44 percent of the population in 2015). The average job today pays 11 times as much per hour in constant dollars as it did a century ago. Many economists are saying that while this may all be true, the future will be different because of the unprecedented acceleration of progress. Although he expresses some cautions, Dormehl shares my optimism that we will be able to deploy artificial intelligence in the role of brain extenders to keep ahead of this economic curve. As he writes, “Barring some catastrophic risk, A.I. will represent an overall net positive for humanity when it comes to employment.”

Many observers of A.I. and the other 21st-century exponential technologies like biotechnology and nanotechnology attempt to peer ahead into these continuing, accelerating gains and fall off the horse. Dormehl ends his book still in the saddle, discussing the prospect of conscious A.I.s that will demand and/or deserve rights, and the possibility of “uploading” our brains to the cloud. I recommend this book to anyone with a lay scientific background who wants to understand what I would argue is today’s most important revolution, where it came from, how it works and what is on the horizon.

Photo

“Heart of the Machine,” the futurist Richard Yonck’s new book, contains its important insight in the title. People often think of feelings as secondary or as a sideshow to intellect, as if the essence of human intelligence is the ability to think logically. If that were true, then machines would already be ahead of us. The superiority of human thinking lies in our ability to express a loving sentiment, to create and appreciate music, to get a joke. These are all examples of emotional intelligence, and emotion is at both the bottom and top of our thinking. We still have that old reptilian brain that provides our basic motivations for meeting our physical needs and to which we can trace feelings like anger and jealousy. The neocortex, a layer covering the brain, emerged in mammals two hundred million years ago and is organized as a hierarchy of modules. Two million years ago, we got these big foreheads that house the frontal cortex, which enabled us to process language and music.

Yonck provides a compelling and thorough history of the interaction between our emotional lives and our technology. He starts with the ability of early hominids to fashion stone tools, perhaps the earliest example of technology. Remarkably, the complex skills required were passed down from one generation to the next for over three million years, despite the fact that for most of this period language had not yet been invented. Yonck makes a strong case that it was our early ability to communicate through pre-language emotional expressions that enabled the remarkable survival of this skill, and enabled technology to take root.

Yonck describes today’s emerging technologies for understanding our emotions using images of facial expressions, intonation patterns, respiration, galvanic skin response and other signals — and how these capabilities might be put to use by the military and in interactive augmented-reality experiences. And he recounts how all communication technologies, from the first books to today’s virtual reality, have had significant sexual applications and will enhance sensual experiences in the future.

Yonck is a sure-footed guide and is not without a sense of humor. He imagines, for example, a scenario a few decades from now with a spirited exchange at the dinner table. “No daughter of mine is marrying a robot and that’s final!” a father exclaims.

His daughter angrily replies: “Michael is a cybernetic person with the same rights you and I have! We’re getting married and there’s nothing you can do to change that!” She storms out of the room.

Yonck concludes that we will merge with our technology — a position I agree with — and that we have been doing so for a long time. He argues, as have I, that merging with future superintelligent A.I.s is our best strategy for ensuring a beneficial outcome. Achieving this requires creating technology that can understand and master human emotion. To those who would argue that such a quest is arrogantly playing God, he says simply: “This is what we do.”
