Will Artificial Intelligence Become Conscious?


Forget about today's modest incremental advances in artificial intelligence, such as the increasing ability of cars to drive themselves. Waiting in the wings might be a groundbreaking development: a machine that is aware of itself and its surroundings, and that could take in and process massive amounts of data in real time. It could be sent on dangerous missions, into space or combat. In addition to driving people around, it might be able to cook, clean, do laundry, and even keep humans company when other people aren't nearby.

A particularly advanced set of machines could replace humans at literally all jobs. That would save humanity from workaday drudgery, but it would also shake many societal foundations. A life of no work and only play may turn out to be a dystopia.

Conscious machines would also raise troubling legal and ethical problems. Would a conscious machine be a "person" under the law, and be liable if its actions hurt someone, or if something goes wrong? To consider a more frightening scenario, might these machines rebel against humans and wish to eliminate us altogether? If so, they would represent the culmination of evolution.

As a professor of electrical engineering and computer science who works in machine learning and quantum theory, I can say that researchers are divided on whether these sorts of hyperaware machines will ever exist. There's also debate about whether machines could or should be called "conscious" in the way we think of humans, and even some animals, as conscious. Some of the questions have to do with technology; others have to do with what consciousness actually is.

Is Awareness Enough?
Most computer scientists think that consciousness is a characteristic that will emerge as technology develops. Some believe that consciousness involves accepting new information, storing and retrieving old information, and cognitive processing of it all into perceptions and actions. If that's right, then one day machines will indeed be the ultimate consciousness. They'll be able to gather more information than a human, store more than many libraries, access vast databases in milliseconds, and compute all of it into decisions more complex, and yet more logical, than any person ever could.

On the other hand, there are physicists and philosophers who say there's something more about human behavior that cannot be computed by a machine. Creativity, for example, and the sense of freedom people possess don't appear to come from logic or calculations.

Yet these are not the only views of what consciousness is, or whether machines could ever achieve it.

Quantum Views
Another viewpoint on consciousness comes from quantum theory, the deepest theory of physics. According to the orthodox Copenhagen Interpretation, consciousness and the physical world are complementary aspects of the same reality. When a person observes, or experiments on, some aspect of the physical world, that person's conscious interaction causes discernible change. Since it takes consciousness as a given and makes no attempt to derive it from physics, the Copenhagen Interpretation may be called the "big-C" view of consciousness, in which consciousness is a thing that exists on its own, although it requires brains to become real. This view was popular with the pioneers of quantum theory such as Niels Bohr, Werner Heisenberg and Erwin Schrodinger.

The interaction between consciousness and matter leads to paradoxes that remain unresolved after 80 years of debate. A well-known example is the paradox of Schrodinger's cat, in which a cat is placed in a situation that leaves it equally likely to survive or die, and the act of observation itself is what makes the outcome certain.
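In textbook notation (a standard illustration, not tied to any one interpretation), the cat's predicament before observation is an equal superposition of the two outcomes, and observation picks one of them:

```latex
% Before observation: equal superposition of the two outcomes
|\psi\rangle \;=\; \tfrac{1}{\sqrt{2}}\bigl(|\text{alive}\rangle + |\text{dead}\rangle\bigr),
\qquad
P(\text{alive}) \;=\; P(\text{dead}) \;=\; \Bigl|\tfrac{1}{\sqrt{2}}\Bigr|^{2} \;=\; \tfrac{1}{2}
```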

The opposing view is that consciousness emerges from biology, just as biology itself emerges from chemistry which, in turn, emerges from physics. We call this less expansive concept of consciousness "little-C." It agrees with the neuroscientists' view that the processes of the mind are identical to states and processes of the brain. It also agrees with a more recent interpretation of quantum theory motivated by an attempt to rid it of paradoxes, the Many Worlds Interpretation, in which observers are a part of the mathematics of physics.

Philosophers of science believe that these modern quantum physics views of consciousness have parallels in ancient philosophy. Big-C is like the theory of mind in Vedanta, in which consciousness is the fundamental basis of reality, on par with the physical universe.

Little-C, by contrast, is quite similar to Buddhism. Although the Buddha chose not to address the question of the nature of consciousness, his followers declared that mind and consciousness arise out of emptiness or nothingness.

Big-C and Scientific Discovery
Scientists are also exploring whether consciousness is always a computational process. Some scholars have argued that the creative moment is not at the end of a deliberate computation. For instance, dreams or visions are supposed to have inspired Elias Howe's 1845 design of the modern sewing machine, and August Kekule's discovery of the structure of benzene in 1862.

A dramatic piece of evidence in favor of big-C consciousness existing on its own is the life of self-taught Indian mathematician Srinivasa Ramanujan, who died in 1920 at the age of 32. His notebook, which was lost and forgotten for about 50 years and published only in 1988, contains several thousand formulas, without proof, in different areas of mathematics that were well ahead of their time. Furthermore, the methods by which he found the formulas remain elusive. He himself claimed that they were revealed to him by a goddess while he was asleep.

The concept of big-C consciousness raises the questions of how it is related to matter, and how matter and mind mutually influence each other. Consciousness alone cannot make physical changes to the world, but perhaps it can change the probabilities in the evolution of quantum processes. The act of observation can freeze and even influence atoms' movements, as Cornell physicists demonstrated in 2015. This may very well be an explanation of how matter and mind interact.

Mind and Self-Organizing Systems
It is possible that the phenomenon of consciousness requires a self-organizing system, like the brain's physical structure. If so, then current machines will come up short.

Scholars don't know if adaptive self-organizing machines can be designed to be as sophisticated as the human brain; we lack a mathematical theory of computation for systems like that. Perhaps it's true that only biological machines can be sufficiently creative and flexible. But then that suggests people should, or soon will, start working on engineering new biological structures that are, or could become, conscious.

Artificial intelligence

Artificial intelligence (AI, also machine intelligence, MI) is intelligence exhibited by machines, rather than humans or other animals (natural intelligence, NI). In computer science, the field of AI research defines itself as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of success at some goal.[1] Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”.[2]
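As a rough illustration of that "intelligent agent" definition (a minimal sketch only; the `Environment` interface and the utility function are invented for this example, not taken from any particular AI system), an agent repeatedly perceives its environment and chooses the action it estimates best serves its goal:

```python
from typing import Callable, Iterable, Protocol


class Environment(Protocol):
    """Hypothetical environment interface, assumed for this sketch."""
    def observe(self) -> dict: ...           # what the agent can currently perceive
    def actions(self) -> Iterable[str]: ...  # actions available right now
    def apply(self, action: str) -> None: ...


def agent_step(env: Environment, utility: Callable[[dict, str], float]) -> str:
    """One perceive-decide-act cycle: pick the action with the highest estimated utility."""
    percept = env.observe()
    best_action = max(env.actions(), key=lambda a: utility(percept, a))
    env.apply(best_action)
    return best_action
```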

The scope of AI is disputed: as machines become increasingly capable, tasks considered as requiring “intelligence” are often removed from the definition, a phenomenon known as the AI effect, leading to the quip “AI is whatever hasn’t been done yet.”[3] For instance, optical character recognition is frequently excluded from “artificial intelligence”, having become a routine technology.[4] Capabilities generally classified as AI, as of 2017, include successfully understanding human speech,[5] competing at a high level in strategic game systems (such as chess and Go[6]), autonomous cars, intelligent routing in content delivery networks, military simulations, and interpreting complex data.

Artificial intelligence was founded as an academic discipline in 1956, and in the years since has experienced several waves of optimism,[7][8] followed by disappointment and the loss of funding (known as an “AI winter”),[9][10] followed by new approaches, success and renewed funding.[11] For most of its history, AI research has been divided into subfields that often fail to communicate with each other.[12] However, in the early 21st century statistical approaches to machine learning became successful enough to eclipse all other tools, approaches, problems and schools of thought.[11]

The traditional problems (or goals) of AI research include reasoning, knowledge, planning, learning, natural language processing, perception and the ability to move and manipulate objects.[13] General intelligence is among the field’s long-term goals.[14] Approaches include statistical methods, computational intelligence, and traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimization, neural networks and methods based on statistics, probability and economics. The AI field draws upon computer science, mathematics, psychology, linguistics, philosophy, neuroscience, artificial psychology and many others.

The field was founded on the claim that human intelligence “can be so precisely described that a machine can be made to simulate it”.[15] This raises philosophical arguments about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues which have been explored by myth, fiction and philosophy since antiquity.[16] Some people also consider AI a danger to humanity if it progresses unabatedly.[17]

In the twenty-first century, AI techniques have experienced a resurgence following concurrent advances in computer power, large amounts of data, and theoretical understanding, and AI techniques have become an essential part of the technology industry, helping to solve many challenging problems in computer science.[18]

WELCOME TO MY TECHNICAL UNIVERSE: the physics of cognitive systems


I used to have a description of each of my papers on this page, but it got very boring to read as the numbers grew, so I moved most of it to here. After graduate work on the role of atomic and molecular chemistry in cosmic reionization, I have mainly focused my research on issues related to constraining cosmological models. A suite of papers developed methods for analyzing cosmological data sets and applied them to various CMB experiments and galaxy redshift surveys, often in collaboration with the experimentalists who had taken the data. Another series of papers tackled various “dirty laundry” issues such as microwave foregrounds and mass-to-light bias. Other papers like this one develop and apply techniques for clarifying the big picture in cosmology: comparing and combining diverse cosmological probes, cross-checking for consistency and constraining cosmological models and their free parameters. (The difference between cosmology and ice hockey is that I don’t get penalized for cross-checking…) My main current research interest is cosmology theory and phenomenology. I’m particularly enthusiastic about the prospects of comparing and combining current and upcoming data on CMB, LSS, galaxy clusters, lensing, LyA forest clustering, SN Ia, 21 cm tomography, etc. to raise the ambition level beyond the current cosmological parameter game, testing rather than assuming the underlying physics. This paper contains my battle cry. I also retain a strong interest in low-level nuts-and-bolts analysis and interpretation of data, firmly believing that the devil is in the details, and am actively working on neutral hydrogen tomography theory, experiment and data analysis for our Omniscope project, which you can read all about here.

OTHER RESEARCH: SIDE INTERESTS
Early galaxy formation and the end of the cosmic dark ages
One of the main challenges in modern cosmology is to quantify how small density fluctuations at the recombination epoch at redshift around z=1000 evolved into the galaxies and the large-scale structure we observe in the universe today. My Ph.D. thesis with Joe Silk focused on ways of probing the interesting intermediate epoch. The emphasis was on the role played by non-linear feedback, where a small fraction of matter forming luminous objects such as stars or QSO’s can inject enough energy into their surroundings to radically alter subsequent events. We know that the intergalactic medium (IGM) was reionized at some point, but the details of when and how this occurred remain open. The absence of a Gunn-Peterson trough in the spectra of high-redshift quasars suggests that it happened before z=5, which could be achieved through supernova driven winds from early galaxies. Photoionization was thought to be able to partially reionize the IGM much earlier, perhaps early enough to affect the cosmic microwave background (CMB) fluctuations, especially in an open universe. However, extremely early reionization is ruled out by the COBE FIRAS constraints on the Compton y-distortion. To make predictions for when the first objects formed and how big they were, you need to worry about something I hate: molecules. Although I was so fed up with rate discrepancies in the molecule literature that I verged on making myself a Ghostbuster-style T-shirt reading “MOLECULES – JUST SAY NO”, the irony is that my molecule paper that I hated so much ended up being one of my most cited ones. Whereas others that I had lots of fun with went largely unnoticed…

Math problems
I’m also interested in physics-related mathematics problems in general. For instance, if you don’t believe that part of a constrained elliptic metal sheet may bend towards you if you try to push it away, you are making the same mistake that the famous mathematician Hadamard once did.

WELCOME TO MY TECHNICAL UNIVERSE
I love working on projects that involve cool questions, great state-of-the-art data and powerful physical/mathematical/computational tools. During my first quarter-century as a physics researcher, this criterion has led me to work mainly on cosmology and quantum information. Although I’m continuing my cosmology work with the HERA collaboration, the main focus of my current research is on the physics of cognitive systems: using physics-based techniques to understand how brains work and to build better AI (artificial intelligence) systems. If you’re interested in working with me on these topics, please let me know, as I’m potentially looking for new students and postdocs (see requirements). I’m fortunate to have collaborators who generously share amazing neuroscience data with my group, including Ed Boyden, Emery Brown and Tomaso Poggio at MIT and Gabriel Kreiman at Harvard, and to have such inspiring colleagues here in our MIT Physics Department in our new division studying the physics of living systems. I’ve been pleasantly surprised by how many of the data analysis techniques I’ve developed for cosmology can be adapted to neuroscience data as well. There’s clearly no shortage of fascinating questions surrounding the physics of intelligence, and there’s no shortage of powerful theoretical tools either, ranging from neural network physics and non-equilibrium statistical mechanics to information theory, the renormalization group and deep learning. Intriguingly and surprisingly, there’s a duality between the last two. I recently helped organize conferences on the physics of information and artificial intelligence. I’m very interested in the question of how to model an observer in physics, and whether simple necessary conditions for a physical system being a conscious observer can help explain how the familiar object hierarchy of the classical world emerges from the raw mathematical formalism of quantum mechanics. Here’s a taxonomy of proposed consciousness measures. Here’s a TEDx talk of mine about the physics of consciousness. Here’s an intriguing connection between critical behavior in magnets, language, music and DNA. In older work of mine on the physics of the brain, I showed that neuron decoherence is far too fast for the brain to be a quantum computer. However, it’s nonetheless interesting to study our brains as quantum systems, to better understand why they perceive the sort of classical world that they do. For example, why do we feel that we live in real space rather than Fourier space, even though both are equally valid quantum descriptions related by a unitary transformation?

Quantum information
My work on the physics of cognitive systems is a natural outgrowth of my long-standing interest in quantum information, both for enabling new technologies such as quantum computing and for shedding new light on how the world fundamentally works. For example, I’m interested in how the second law of thermodynamics can be generalized to explain how the entropy of a system typically decreases while you observe a system and increases while you don’t, and how this can help explain how inflation causes the emergence of an arrow of time. When you don’t observe an interacting system, you can get decoherence, which I had the joy of rediscovering as a grad student; if you’d like to know more about what this is, check out my article with John Archibald Wheeler in Scientific American here. I’m interested in decoherence both for its quantitative implications for quantum computing etc. and for its philosophical implications for the interpretation of quantum mechanics. For much more on this wackier side of mine, click the banana icon above. Since macroscopic systems are virtually impossible to isolate from their surroundings, a number of quantitative predictions can be made for how their wavefunction will appear to collapse, in good agreement with what we in fact observe. Similar quantitative predictions can be made for models of heat baths, showing how the effects of the environment cause the familiar entropy increase and apparent directionality of time. Intriguingly, decoherence can also be shown to produce generalized coherent states, indicating that these are not merely a useful approximation, but indeed a type of quantum state that we should expect nature to be full of. All these changes in the quantum density matrix can in principle be measured experimentally, with phases and all.
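Schematically (a standard textbook picture rather than a result from any single paper), decoherence rapidly damps the interference terms of a system's density matrix while leaving the classical probabilities on the diagonal intact, which is why superpositions of macroscopically distinct states appear to collapse:

```latex
% Two-state system coupled to an environment: off-diagonal (interference) terms decay
\rho(t) \;=\;
\begin{pmatrix}
\rho_{00} & \rho_{01}\,e^{-t/\tau_{\rm dec}}\\[2pt]
\rho_{10}\,e^{-t/\tau_{\rm dec}} & \rho_{11}
\end{pmatrix},
\qquad
\tau_{\rm dec} \;\ll\; \tau_{\rm relaxation}
```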

Cosmology
My cosmology research has been focused on precision cosmology, e.g., combining theoretical work with new measurements to place sharp constraints on cosmological models and their free parameters. (Skip to here if you already know all this.) Spectacular new measurements are providing powerful tools for this:

So far, I’ve worked mainly on CMB, LSS and 21 cm tomography, with some papers involving lensing, SN Ia and LyAF as well. Why do I find cosmology exciting? (Even if you don’t find cosmology exciting, there are good reasons why you should support physics research.)

  1. There are some very basic questions that still haven’t been answered. For instance,
    • Is really only 5% of our universe made of atoms? So it seems, but what precisely is the weird “dark matter” and “dark energy” that make up the rest?
    • Will the Universe expand forever or end in a cataclysmic crunch or big rip? The smart money is now on the first option, but the jury is still out.
    • How did it all begin, or did it? This is linked to particle physics and unifying gravity with quantum theory.
    • Are there infinitely many other stars, or does space connect back on itself? Most of my colleagues assume it is infinite and the data supports this, but we don’t know yet.
  2. Thanks to an avalanche of great new data, driven by advances in satellite, detector and computer technology, we may be only years away from answering some of these questions.

Satellites Rock!
Since our atmosphere messes up most electromagnetic waves coming from space (the main exceptions being radio waves and visible light), the advent of satellites has revolutionized our ability to photograph the Universe in microwaves, infrared light, ultraviolet light, X-rays and gamma rays. New low-temperature detectors have greatly improved what can be done from the ground as well, and the computer revolution has enabled us to gather and process huge data quantities, doing research that would have been unthinkable twenty years ago. This data avalanche has transformed cosmology from being a mainly theoretical field, occasionally ridiculed as speculative and flaky, into a data-driven quantitative field where competing theories can be tested with ever-increasing precision. I find CMB, LSS, lensing, SN Ia, LyAF, clusters and BBN to be very exciting areas, since they are all being transformed by new high-precision measurements as described below. Since each of them measures different but related aspects of the Universe, they both complement each other and allow lots of cross-checks.

What are these cosmological parameters? Cosmic matter budget: In our standard cosmological model, the Universe was once in an extremely dense and hot state, where things were essentially the same everywhere in space, with only tiny fluctuations (at the level of 0.00001) in the density. As the Universe expanded and cooled, gravitational instability caused these fluctuations to grow into the galaxies and the large-scale structure that we observe in the Universe today. To calculate the details of this, we need to know about a dozen numbers, so-called cosmological parameters. Most of these parameters specify the cosmic matter budget, i.e., what the density of the Universe is made up of: the amounts of the following ingredients (summed up schematically in the budget equation after this list):

  • Baryons – the kind of particles that you and I and all the chemical elements we learned about in school are made of: protons & neutrons. Baryons appear to make up only about 5% of all stuff in the Universe.
  • Photons – the particles that make up light. Their density is the best measured one on this list.
  • Massive neutrinos – neutrinos are very shy particles. They are known to exist, and now at least two of the three or more kinds are known to have mass.
  • Cold dark matter – unseen mystery particles widely believed to exist. There seems to be about five times more of this strange stuff than baryons, making us a minority in the Universe.
  • Curvature – if the total density differs from a certain critical value, space will be curved. Sufficiently high density would make space be finite, curving back on itself like the 3D surface of a 4D hypersphere.
  • Dark energy – little more than a fancy name for our ignorance of what seems to make up about two thirds of the matter budget. One popular candidate is a “cosmological constant”, a.k.a. Lambda, which Einstein invented and later called his greatest blunder. Other candidates are more complicated modifications to Einstein’s theory of gravity, as well as energy fields known as “quintessence”. Dark energy causes gravitational repulsion in place of attraction, and combining new SN Ia and CMB data indicates that we might be living with Lambda after all.
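Schematically, each ingredient is quoted as a density parameter (its fraction of the critical density), and the budget has to add up; the rough numbers below are just the percentages mentioned in the list above, with curvature defined by whatever is left over:

```latex
% Cosmic matter budget as density parameters (fractions of the critical density)
\Omega_{\rm tot} \;=\; \Omega_b + \Omega_{\rm CDM} + \Omega_\nu + \Omega_\gamma + \Omega_\Lambda
\;\approx\; 0.05 + 0.25 + \cdots + 0.7 \;\approx\; 1,
\qquad
\Omega_k \;\equiv\; 1 - \Omega_{\rm tot}
```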

Then there are a few parameters describing those tiny fluctuations in the early Universe: exactly how tiny they were, the ratio of fluctuations on small and large scales, the relative phase of fluctuations in the different types of matter, etc. Accurately measuring these parameters would test the most popular theory for the origin of these wiggles, known as inflation, and teach us about physics at much higher energies than are accessible with particle accelerator experiments. Finally, there are some parameters that Dick Bond would refer to as “gastrophysics”, since they involve gas and other ghastly stuff. One example is the extent to which feedback from the first galaxies has affected the CMB fluctuations via reionization. Another example is bias, the relation between fluctuations in the matter density and the number of galaxies. One of my main current interests is using the avalanche of new data to raise the ambition level beyond cosmological parameters, testing rather than assuming the underlying physics. My battle cry is published here with nuts and bolts details here and here.

The cosmic toolbox
Here is a brief summary of some key cosmological observables and what they can teach us about cosmological parameters.

Photos of the cosmic microwave background (CMB) radiation like the one to the left show us the most distant object we can see: a hot, opaque wall of glowing hydrogen plasma about 14 billion light years away. Why is it there? Well, as we look further away, we’re seeing things that happened longer ago, since it’s taken the light a long time to get here. We see the Sun as it was eight minutes ago, the Andromeda galaxy the way it was a few million years ago and this glowing surface as it was just 400,000 years after the Big Bang. We can see that far back since the hydrogen gas that fills intergalactic space is transparent, but we can’t see further, since earlier the hydrogen was so hot that it was an ionized plasma, opaque to light, looking like a hot glowing wall just like the surface of the Sun. The detailed patterns of hotter and colder spots on this wall constitute a goldmine of information about the cosmological parameters mentioned above. If you are a newcomer and want an introduction to CMB fluctuations and what we can learn from them, I’ve written a review here. If you don’t have a physics background, I recommend the on-line tutorials by Wayne Hu and Ned Wright. Two promising new CMB fronts are opening up, CMB polarization and arcminute-scale CMB, which are likely to keep the CMB field lively for at least another decade.

Hydrogen tomography
Mapping our universe in 3D by imaging the redshifted 21 cm line from neutral hydrogen has the potential to overtake the cosmic microwave background as our most powerful cosmological probe, because it can map a much larger volume of our Universe, shedding new light on the epoch of reionization, inflation, dark matter, dark energy, and neutrino masses. For this reason, my group built MITEoR, a pathfinder low-frequency radio interferometer whose goal was to test technologies that greatly reduce the cost of such 3D mapping for a given sensitivity. MITEoR accomplished this by using massive baseline redundancy both to enable automated precision calibration and to cut the correlator cost scaling from N² to N log N, where N is the number of antennas. The success of MITEoR with its 64 dual-polarization elements bodes well for the more ambitious HERA project, which incorporates many of the technologies MITEoR tested, using dramatically larger collecting area.
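The N² to N log N correlator scaling mentioned above comes from baseline redundancy: on a regular antenna grid, correlating every antenna pair one by one is equivalent to computing an autocorrelation, which an FFT does in N log N time. The one-dimensional toy below (illustrative only, not MITEoR's actual pipeline) checks that the two approaches give the same redundant-baseline visibilities for a single snapshot of antenna voltages:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                              # antennas on a regular 1D grid
v = rng.normal(size=N) + 1j * rng.normal(size=N)    # one snapshot of antenna voltages

# Brute force: multiply every antenna pair and sum redundant baselines -> O(N^2)
brute = np.array([np.sum(v[d:] * np.conj(v[:N - d])) for d in range(N)])

# FFT trick: zero-pad and autocorrelate in Fourier space -> O(N log N)
f = np.fft.fft(v, 2 * N)
fft_corr = np.fft.ifft(f * np.conj(f))[:N]

assert np.allclose(brute, fft_corr)                 # same visibility per baseline length
```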

Galaxy clusters and large-scale structure
3D mapping of the Universe with galaxy redshift surveys offers another window on dark matter properties, through its gravitational effects on galaxy clustering. This field is currently being transformed by ever larger galaxy redshift surveys. I’ve had lots of fun working with my colleagues on the Sloan Digital Sky Survey (SDSS) to carefully analyze the gargantuan galaxy maps and work out what they tell us about our cosmic composition, origins and ultimate fate. The abundance of galaxy clusters, the largest gravitationally bound and equilibrated blobs of stuff in the Universe, is a very sensitive probe of both the cosmic expansion history and the growth of matter clustering. Many powerful cluster finding techniques are contributing to rapid growth in the number of known clusters and our knowledge of their properties: identifying them in 3D galaxy surveys, seeing their hot gas as hot spots in X-ray maps or cold spots in microwave maps (the so-called SZ effect), or spotting their gravitational effects with gravitational lensing.

Gravitational lensing
Yet another probe of dark matter is offered by gravitational lensing, whereby its gravitational pull bends light rays and distorts images of distant objects. The first large-scale detections of this effect were reported by four groups (astro-ph/0002500, 0003008, 0003014, 0003338) in the year 2000, and I anticipate making heavy use of such measurements as they continue to improve, partly in collaboration with Bhuvnesh Jain at Penn. Lensing is ultimately as promising as the CMB and is free from the murky bias issues plaguing LSS and LyAF measurements, since it probes the matter density directly via its gravitational pull. I’ve also dabbled some in the stronger lensing effects caused by galaxy cores, which offer additional insights into the detailed nature of the dark matter.

Supernovae Ia
If a white dwarf (the corpse of a burned-out low-mass star like our Sun) orbits another dying star, it may gradually steal its gas and exceed the maximum mass with which it can be stable. This makes it collapse under its own weight and blow up in a cataclysmic explosion called a supernova of type Ia. Since all of these cosmic bombs weigh the same when they go off (about 1.4 solar masses, the so-called Chandrasekhar mass), they all release roughly the same amount of energy, and a more detailed calibration of this energy is possible by measuring how fast the explosion dims, making it the best “standard candle” visible at cosmological distances. The Supernova Cosmology Project and the High-z Supernova Search Team mapped out how bright SN Ia looked at different redshifts and found the first evidence in 1998 that the expansion of the Universe was accelerating. This approach can ultimately provide a direct measurement of the density of the Universe as a function of time, helping unravel the nature of dark energy; I hope the SNAP project or one of its competitors gets funded. The image to the left resulted from a different type of supernova, but I couldn’t resist showing it anyway.
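The standard-candle idea boils down to one relation: because the intrinsic brightness M of a type Ia supernova is (after calibration) known, its apparent brightness m fixes its luminosity distance, and measuring this as a function of redshift traces the expansion history:

```latex
% Distance modulus: apparent minus absolute magnitude fixes the luminosity distance
\mu \;=\; m - M \;=\; 5\,\log_{10}\!\left(\frac{d_L}{10\ {\rm pc}}\right),
\qquad
d_L \;=\; d_L\bigl(z;\,\Omega_m,\Omega_\Lambda,\dots\bigr)
```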

Lyman Alpha Forest
The so-called Lyman Alpha Forest, cosmic gas clouds backlit by quasars, offers yet another new and exciting probe of how dark matter has clumped ordinary matter together, and is sensitive to an epoch when the Universe was merely 10-20% of its present age. Although relating the measured absorption to the densities of gas and dark matter involves some complications, it completely circumvents the Pandora’s box of galaxy biasing. Cosmic observations are rapidly advancing on many other fronts as well, e.g., with direct measurements of the cosmic expansion rate and the cosmic baryon fraction.

Professor Tom Leighton wins 2018 Marconi Prize

MIT professor of mathematics Tom Leighton has been selected to receive the 2018 Marconi Prize. The Marconi Society, dedicated to furthering scientific achievements in communications and the Internet, is honoring Leighton for his fundamental contributions to technology and the establishment of the content delivery network (CDN) industry.

Leighton ’81, a professor in the Department of Mathematics and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL), will be awarded at The Marconi Society’s annual awards dinner in Bologna, Italy, on Oct. 2.

“Being recognized by the Marconi Society is an incredible honor,” said Leighton. “It’s an honor not just for me, but also for Danny Lewin, who created this company with me, and for all of the people at Akamai who have worked so hard for over two decades to make this technology real so that the internet can scale to be a secure and affordable platform where entertainment, business, and life are enabled to reach unimagined potential.”

Leighton developed the algorithms now used to deliver trillions of content requests over the internet every day. Akamai, the world’s largest cloud delivery platform, routes and replicates content over a gigantic network of distributed servers, using algorithms to find and utilize servers closest to the end user, thereby avoiding congestion within the internet.
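In highly simplified form (an illustrative sketch, not Akamai's proprietary algorithms; the server list, latency figures and load threshold below are invented for the example), the core mapping decision looks like this: among the replicas that hold the requested object, pick the one expected to serve the user fastest while avoiding overloaded machines.

```python
from dataclasses import dataclass


@dataclass
class EdgeServer:
    name: str
    has_object: bool        # does this replica already cache the requested content?
    est_latency_ms: float   # estimated round-trip time to the user, from recent measurements
    load: float             # fraction of capacity currently in use, 0.0 - 1.0


def pick_server(servers: list[EdgeServer]) -> EdgeServer:
    """Choose a nearby, lightly loaded replica for this request."""
    candidates = [s for s in servers if s.has_object and s.load < 0.9]
    return min(candidates, key=lambda s: s.est_latency_ms * (1.0 + s.load))


servers = [
    EdgeServer("bos-1", True, 12.0, 0.40),
    EdgeServer("nyc-2", True, 18.0, 0.10),
    EdgeServer("sfo-1", False, 70.0, 0.20),
]
print(pick_server(servers).name)   # -> "bos-1" with these made-up numbers
```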

“Tom’s work at MIT and with Akamai has had a groundbreaking impact in making the world a more connected place,” says Professor Daniela Rus, director of CSAIL. “His insights on web content delivery have played a key role in enabling us to share information and media online, and all of us at CSAIL are so very proud of him for this honor.”

“What is amazing about Tom is that, throughout his career, he is and has been as comfortable and talented as a researcher designing clever and efficient algorithms, as an educator teaching and mentoring our undergraduate and graduate students, as an entrepreneur turning mathematical and algorithmic ideas into a rapidly-expanding startup, and as an executive and industry leader able to weather the storm in the most difficult times and bring Akamai to a highly successful company,” says Michel Goemans, interim head of the mathematics department.

Leighton has said that Akamai’s role within the internet revolution was to end the “World Wide Wait.” World Wide Web founder and 2002 Marconi Fellow Tim Berners-Lee, who was the 3Com Founders chair at MIT’s Laboratory for Computer Science (LCS), foresaw an internet congestion issue and in 1995 challenged his MIT colleagues to invent a better way to deliver content. Leighton set out with one of his brightest students, Danny Lewin, to solve this challenge using distributed computing algorithms.

After two years of research, Leighton and Lewin discovered a solution — but then faced the challenge of convincing others that it would work. In 1997, they entered the $50K Entrepreneurship Competition run by the MIT Sloan School of Management.

“We literally went to the library and got the equivalent of ‘Business Plans for Dummies’ because, as theoretical mathematicians, we had no experience in business,” Leighton remembers. But they learned quickly from those who did, including business professionals they met through the $50K Competition.

At the time, Leighton and Lewin didn’t envision building their own company around the technology. Instead, they planned to license it to service providers. However, they found that carriers needed to be convinced that the technology would work at scale before they were interested. “Akamai was state-of-the-art in theory, meaning that it was well beyond where people were in practice. I think folks were very skeptical that it would work,” says Leighton.

While carriers were ambivalent, content providers were receptive: The internet had proven vulnerable to congestion that was crashing websites during high demand periods. So Leighton and Lewin decided to build their own content delivery network and provide content delivery as a service. Although their business plan did not win the $50K contest, it attracted enough venture capital investment to get a company started, and Leighton and Lewin incorporated Akamai in 1998.

Akamai’s first big opportunity came in 1999 with the U.S. collegiate basketball tournament known as “March Madness.” With 64 teams playing basketball during the course of a few days, millions of viewers were watching their favorite teams online, mostly from work. When ESPN and their hosting company Infoseek became overloaded with traffic, they asked if Akamai could handle 2,000 content requests per second.

Leighton and his team said yes — even though up to that point they had only been delivering one request every few minutes. “We were a startup and we believed,” said Leighton. Akamai was able to handle 3,000 requests per second, helping ESPN to get back online and run six times faster than they would on a normal traffic day.

Akamai’s technology and viability were proven; the company went public in 1999, earning millions for several of its young employees. But when the tech bubble burst the next year, Akamai’s stock plummeted and the firm faced the prospect of retrenchment. Then, on September 11, 2001, Danny Lewin was killed aboard American Airlines Flight 11 in the terrorist attack on the Twin Towers. Akamai employees had to set aside their personal grief and complete emergency integrations to restore client sites that had crashed in the overwhelming online traffic created that day.

Akamai rebounded from that dark period, and over the years evolved from static image content to handle dynamic content and real-time applications like streaming video. Today, Akamai has over 240,000 servers in over 130 countries and within more than 1,700 networks around the world, handling about 20 to 30 percent of the traffic on the internet. Akamai accelerates trillions of internet requests each day, protects web and mobile assets from targeted application and DDoS attacks, and enables internet users to have a seamless and secure experience across different device types and network conditions. They created new technology for leveraging machine learning to analyze real-user behavior to continuously optimize a website’s performance, as well as algorithms that differentiate between human users and bots. Akamai’s security business surpassed half a billion dollars per year in revenue, making it the fastest growing part of Akamai’s business.

“Dr. Leighton is the embodiment of what the Marconi Prize honors,” says Vint Cerf, Marconi Society chair and chief internet evangelist at Google. “He and his research partner, Danny Lewin, tackled one of the major problems limiting the power of the internet, and when they developed the solution, they founded Akamai — now one of the premier technology companies in the world — to bring it to market. This story is truly remarkable.”

By receiving the Marconi Prize, Leighton joins a distinguished list of scientists whose work underlies all of modern communication technology, from the microprocessor to the internet, and from optical fiber to the latest wireless breakthroughs. Other Marconi Fellows include 2007 winner Ron Rivest, an Institute Professor, a member of CSAIL and the lab’s Theory of Computation Group, and a founder of its Cryptography and Information Security Group; and LIDS adjunct Dave Forney, ScD (EE) ’65, who received it in 1997.  

In 2016, the MIT Graduate School Council awarded Leighton, jointly with Dean of Science Michael Sipser, the Irwin Sizer Award, for most significant improvements to MIT education, specifically for their development of the successful 18C major: Mathematics with Computer Science. Leighton was also inducted into the National Inventors Hall of Fame in 2017 for Content Delivery Network methods; Danny Lewin was also inducted posthumously.

Leighton said he plans to donate the $100,000 Marconi Prize to The Akamai Foundation, with the goal of promoting the pursuit of excellence in mathematics in grades K-12 to encourage the next generation of technology innovators.



Soft robotic fish swims alongside real ones in coral reefs

This month scientists published rare footage of one of the Arctic’s most elusive sharks. The findings demonstrate that, even with many technological advances in recent years, it remains a challenging task to document marine life up close.

But MIT computer scientists believe they have a possible solution: using robots.

In a paper out today, a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) unveiled “SoFi,” a soft robotic fish that can independently swim alongside real fish in the ocean.

During test dives in the Rainbow Reef in Fiji, SoFi swam at depths of more than 50 feet for up to 40 minutes at once, nimbly handling currents and taking high-resolution photos and videos using (what else?) a fisheye lens.

Using its undulating tail and a unique ability to control its own buoyancy, SoFi can swim in a straight line, turn, or dive up or down. The team also used a waterproofed Super Nintendo controller and developed a custom acoustic communications system that enabled them to change SoFi’s speed and have it make specific moves and turns.

“To our knowledge, this is the first robotic fish that can swim untethered in three dimensions for extended periods of time,” says CSAIL PhD candidate Robert Katzschmann, lead author of the new journal article published today in Science Robotics. “We are excited about the possibility of being able to use a system like this to get closer to marine life than humans can get on their own.”

Katzschmann worked on the project and wrote the paper with CSAIL director Daniela Rus, graduate student Joseph DelPreto and former postdoc Robert MacCurdy, who is now an assistant professor at the University of Colorado at Boulder.

How it works

Existing autonomous underwater vehicles (AUVs) have traditionally been tethered to boats or powered by bulky and expensive propellers.

In contrast, SoFi has a much simpler and more lightweight setup, with a single camera, a motor, and the same lithium polymer battery that’s found in consumer smartphones. To make the robot swim, the motor pumps water into two balloon-like chambers in the fish’s tail that operate like a set of pistons in an engine. As one chamber expands, it bends and flexes to one side; when the actuators push water to the other channel, that one bends and flexes in the other direction.

These alternating actions create a side-to-side motion that mimics the movement of a real fish. By changing its flow patterns, the hydraulic system enables different tail maneuvers that result in a range of swimming speeds, with an average speed of about half a body length per second.
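A rough sketch of that alternating actuation (illustrative pseudocode, not the team's actual controller; the `pump` driver and its `fill()` method are assumed for the example):

```python
import time


def swim(pump, duration_s: float, beat_hz: float = 1.0) -> None:
    """Alternate pumping between the two tail chambers to produce side-to-side motion.

    `pump` is a hypothetical driver with a fill(chamber) method that routes water
    into the 'left' or 'right' chamber; beat_hz sets the tail-beat frequency.
    """
    half_period_s = 0.5 / beat_hz
    chambers = ["left", "right"]
    stop_at = time.monotonic() + duration_s
    beat = 0
    while time.monotonic() < stop_at:
        pump.fill(chambers[beat % 2])   # expanding one chamber flexes the tail toward that side
        time.sleep(half_period_s)       # hold for half a beat, then switch sides
        beat += 1
```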

“The authors show a number of technical achievements in fabrication, powering, and water resistance that allow the robot to move underwater without a tether,” says Cecilia Laschi, a professor of biorobotics at the Sant’Anna School of Advanced Studies in Pisa, Italy. “A robot like this can help explore the reef more closely than current robots, both because it can get closer more safely for the reef and because it can be better accepted by the marine species.”

The entire back half of the fish is made of silicone rubber and flexible plastic, and several components are 3-D-printed, including the head, which holds all of the electronics. To reduce the chance of water leaking into the machinery, the team filled the head with a small amount of baby oil, since it’s a fluid that will not compress from pressure changes during dives.

Indeed, one of the team’s biggest challenges was to get SoFi to swim at different depths. The robot has two fins on its side that adjust the pitch of the fish for up and down diving. To adjust its position vertically, the robot has an adjustable weight compartment and a “buoyancy control unit” that can change its density by compressing and decompressing air.

Katzschmann says that the team developed SoFi with the goal of being as nondisruptive as possible in its environment, from the minimal noise of the motor to the ultrasonic emissions of the team’s communications system, which sends commands using wavelengths of 30 to 36 kilohertz.

“The robot is capable of close observations and interactions with marine life and appears to not be disturbing to real fish,” says Rus.

The project is part of a larger body of work at CSAIL focused on soft robots, which have the potential to be safer, sturdier, and more nimble than their hard-bodied counterparts. Soft robots are in many ways easier to control than rigid robots, since researchers don’t have to worry quite as much about having to avoid collisions.

“Collision avoidance often leads to inefficient motion, since the robot has to settle for a collision-free trajectory,” says Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT. “In contrast, a soft robot is not just more likely to survive a collision, but could use it as information to inform a more efficient motion plan next time around.”

As next steps the team will be working on several improvements on SoFi. Katzschmann plans to increase the fish’s speed by improving the pump system and tweaking the design of its body and tail.

He says that they also plan to soon use the on-board camera to enable SoFi to automatically follow real fish, and to build additional SoFis for biologists to study how fish respond to different changes in their environment.

“We view SoFi as a first step toward developing almost an underwater observatory of sorts,” says Rus. “It has the potential to be a new type of tool for ocean exploration and to open up new avenues for uncovering the mysteries of marine life.”

This project was supported by the National Science Foundation.



How Artificial Intelligence Marketing is Changing the Game

Artificial intelligence marketing takes things one step further than search engine optimization practices and the like: with machine learning, AI can learn and adjust its algorithms to work more efficiently. This means that as you use an AI-powered application, it gets better at its job. And thanks to natural language processing, people can interact with artificial intelligence based tools just as they would with a human.

Using Artificial Intelligence to Build Real Relationships

Retention Science (RS) is a B-to-B artificial intelligence marketing technology that helps retailers and brands understand, engage and retain their customers. It accurately predicts customer behavior and uses those insights to execute one-to-one email, website and mobile marketing campaigns at scale to increase conversion rates and revenue. Founded in 2013 and headquartered in Los Angeles, Retention Science powers campaigns for Target, Dollar Shave Club, The Honest Company, BCBG, Wet Seal, and many other innovative e-commerce brands.

Customer Marketing 2017

We are entering a new era of marketing, the era of Artificial Intelligence Marketing (AIM), an era in which machines run thousands of recursive tests and handle the mathematical optimization of customer value growth, while the marketer stays in control and spends more time being strategic and creative. This session will explore the foundations of AIM and examine real-world use cases in which customers from industries such as gaming, telco, and banking have realized growth in customer value metrics, including customer retention and average revenue per user (ARPU).

Artificial Intelligence Marketing (AIM)

Did I say 100 lessons? I meant 4. Why 4? It fits. Lesson number 4: good guys do win, under-promising and over-delivering does work, and artificial intelligence marketing is real. OK, that is three lessons crammed into one sentence, and one of them is a cliche, but again, this is my blog, and I get to write what I want. I look forward to getting on stage, grabbing the mic, and pitching. My fascination with marketing technology is absolutely clear. Over the last decade, I have continued to be one of the most active investors in the space. This will be fun. Let the transformation begin. Ready to embrace tomorrow.

3 Reasons Why Artificial Intelligence Marketing is Here to Stay | WGN Radio – 720 AM

Once seen as the stuff of sci-fi films, artificial intelligence now seems much more of a reality than previously anticipated. Artificial intelligence marketing can play a huge role in the development of brand analysis and customer interactions. Between sentiment analysis, customer service opportunities, and marketing optimization, artificial intelligence allows marketers to gain a far better understanding of their customer base.

3 Questions: The future of transportation systems

Daniel Sperling is a distinguished professor of civil engineering and environmental science and policy at the University of California at Davis, where he is also founding director of the school’s Institute of Transportation Studies. Sperling, a member of the California Air Resources Board, recently gave a talk at MITEI detailing major technological and societal developments that have the potential to change transportation for the better — or worse. Following the event, Sperling spoke to MITEI about policy, science, and how to harness these change agents for the public good.

(Sperling’s talk is also available as a podcast.)

Q: What are the downsides of the “car-centric monoculture,” as you put it, that we find ourselves living in?

A: Cars provide great value, which is why they are so popular. But too much of a good thing can be destructive. We’ve gone too far. We’ve created a transportation system made up of massive road systems and parking infrastructure that is incredibly expensive for travelers and for society to build and maintain. It is also very energy- and carbon-intensive, and disadvantages those unable to buy and drive cars.

Q: Can you tell me about the three transportation revolutions that you say are going to transform mobility over the next few decades?

A: The three revolutions are electrification, automation, and pooling. Electrification is already under way, with increasing numbers of pure battery electric vehicles, plug-in hybrid vehicles that combine batteries and combustion engines, and fuel cell electric vehicles that run on hydrogen. I currently own a hydrogen car (Toyota Mirai) and have owned two different battery electric cars (Nissan Leaf and Tesla).

A second revolution, automation, is not yet under way, at least in the form of driverless cars. But it is poised to be truly transformational and disruptive for many industries — including automakers, rental cars, infrastructure providers, and transit operators. While partially automated cars are already here, true transformations await fully driverless vehicles, which are not likely to exist in significant numbers for a decade or more.

Perhaps the most pivotal revolution, at least in terms of assuring that the automation revolution serves the public interest, is pooling, or sharing. Automation without pooling would lead to large increases in vehicle use. With pooling, though, automation would lead to reductions in vehicle use, but increases in mobility (passenger miles traveled) by mobility-disadvantaged travelers who are too poor or disabled to drive.

Q: You’ve mentioned that how these revolutions play out depends on which cost factor dominates — money or time. The result would either be heaven or hell for our environment and cities. Explain the nuances of that situation.

A: With pooled, automated and electric cars, the cost of travel would drop precipitously as a result of using cars intensively — spreading costs over 100,000 miles or more per year — having no driver costs, and having multiple riders share the cost. The monetary cost could be as little as 15 cents per mile, versus 60 cents per mile for an individually-owned automated car traveling 15,000 miles per year. The time cost of car occupants, on the other hand, is near zero because they don’t need to pay attention to driving. They can work, sleep, text, drink, and read. Thus, even if the cost of owning and operating the vehicle is substantial, the time savings would be so beneficial that many, perhaps most, would choose car ownership over subscribing to an on-demand service. In fact, most people in affluent countries would likely choose the huge time savings, worth $10, $20, or more per hour, over low travel costs. Thus, policy will be needed to assure that the public interest — environmental externalities, urban livability, access by the mobility disadvantaged — is favored over the gains of a minority of individuals.
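The arithmetic behind those per-mile figures is simple: fixed ownership costs get spread over however many miles the vehicle drives each year, and pooling splits the result among riders. The dollar inputs below are assumptions chosen only to land near the figures quoted above, not numbers from the interview:

```python
def cost_per_passenger_mile(annual_fixed_usd: float, per_mile_usd: float,
                            annual_miles: float, riders: float = 1.0) -> float:
    """Spread fixed ownership costs over annual mileage, then split among riders."""
    per_vehicle_mile = annual_fixed_usd / annual_miles + per_mile_usd
    return per_vehicle_mile / riders


# Individually owned automated car: ~15,000 miles/year, one occupant
print(round(cost_per_passenger_mile(7500, 0.10, 15_000, riders=1), 2))   # ~0.60 $/mile

# Pooled automated car: used intensively (~100,000 miles/year), costs shared by riders
print(round(cost_per_passenger_mile(7500, 0.10, 100_000, riders=2), 2))  # ~0.09 $/mile
```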



Clear and Objective Facts About What a Drone Is (Without All the Hype)

A drone, also called an unmanned aerial vehicle (UAV) among many other names, is a device that flies without a pilot or anyone aboard. These "aircraft" can be controlled remotely, using a remote control device operated by a person standing on the ground, or by on-board computers. UAVs were initially controlled mostly by someone on the ground, but as technology has progressed, more and more aircraft are being built with the aim of being controlled by on-board computer systems.

The idea of an unmanned aerial vehicle can be traced back to the early twentieth century; UAVs were originally intended solely for military missions but have since found a place in our daily lives. Reginald Denny, a popular movie star as well as an avid collector of model planes, is said to have created the first remotely piloted vehicle in 1935. Since that date, the aircraft have been able to adapt to new technologies and can now be found with cameras and other useful extras. As a result, UAVs are used for policing, security work, surveillance and firefighting, and they are also used by many companies to inspect hard-to-reach assets such as piping and wirework, adding an extra layer of safety and security.

The increase in popularity of these devices has, however, brought some downsides along with the positives, as new policies and regulations have had to be introduced to manage the situation. As UAVs became more powerful and the technology improved, they could fly higher and farther from the operator. This has led to problems with airport interference around the world. In 2014, South Africa announced that it needed to tighten security regarding unlawful flying in South African airspace. A year later, the US announced that it was holding a meeting to review the requirements for registering a commercial drone.

In addition to the previously mentioned uses, drones are now also used for surveying crops, counting animals in a particular area, and monitoring crowds, among many other things. Drones have managed to change the way many industries operate and have also allowed many organizations to become more efficient. Drones have also helped to improve safety and play a role in saving lives. Forest fires and natural disasters can be monitored, and a drone can be used to alert the relevant authorities to anyone who is in trouble and in need of help. The exact location of such events can also be found easily.

Drones have also become a hobby for many people worldwide. In the United States, recreational use of such a device is legal; however, the owner needs to take some precautions when flying. The aircraft has to comply with specific guidelines that have been set out; for instance, the device cannot weigh more than 55 pounds. The drone must also not be used in a way that interferes with airport operations, and if a drone is flown within 5 miles of an airport, the airport's traffic control tower must be notified in advance.