
Will Artificial Intelligence Become Conscious?

Forget today's modest incremental advances in artificial intelligence, such as the growing ability of cars to drive themselves. Waiting in the wings might be a groundbreaking development: a machine that is aware of itself and its surroundings, and that could take in and process massive amounts of data in real time. It could be sent on dangerous missions, into space or combat. In addition to driving people around, it might be able to cook, clean, do laundry, and even keep humans company when other people aren't nearby.

A particularly advanced set of machines could replace humans at literally all jobs. That would save humankind from workaday drudgery, but it would also shake many societal foundations. A life of no work and only play may turn out to be a dystopia.

Conscious machines would also raise troubling legal and ethical problems. Would a conscious machine be a "person" under the law and be liable if its actions hurt someone, or if something goes wrong? To consider a more frightening scenario, might these machines rebel against humans and wish to eliminate us altogether? If so, they would represent the culmination of evolution.

As a professor of electrical engineering and computer science who works in artificial intelligence and quantum theory, I can say that researchers are divided on whether these sorts of hyperaware machines will ever exist. There's also debate about whether machines could or should be called "conscious" in the way we think of humans, and even some animals, as conscious. Some of the questions have to do with technology; others have to do with what consciousness actually is.

Is Awareness Enough?
Many computer scientists think that consciousness is a characteristic that will emerge as technology develops. Some believe that consciousness involves accepting new information, storing and retrieving old information, and cognitive processing of it all into perceptions and actions. If that's right, then one day machines will indeed be the ultimate consciousness. They'll be able to gather more information than a human, store more than entire libraries, access vast databases in milliseconds, and compute all of it into decisions more complex, and yet more logical, than any person ever could.

On the other hand, there are physicists and philosophers who say there's something more about human behavior that cannot be computed by a machine. Creativity, for example, and the sense of freedom people possess don't appear to come from logic or calculation.

Yet these are not the only views of what consciousness is, or whether machines could ever achieve it.

Quantum Views
Another viewpoint on consciousness comes from quantum theory, which is the deepest theory of physics. According to the orthodox Copenhagen interpretation, consciousness and the physical world are complementary aspects of the same reality. When a person observes, or experiments on, some aspect of the physical world, that person's conscious interaction causes discernible change. Since it takes consciousness as a given and makes no attempt to derive it from physics, the Copenhagen interpretation may be called the "big-C" view of consciousness, where it is a thing that exists on its own, although it requires brains to become real. This view was popular with the pioneers of quantum theory such as Niels Bohr, Werner Heisenberg and Erwin Schrödinger.

The interaction between consciousness and matter leads to paradoxes that remain unresolved after 80 years of debate. A well-known example is the paradox of Schrödinger's cat, in which a cat is placed in a situation that leaves it equally likely to survive or die, and the act of observation itself is what makes the outcome certain.

The opposing view is that consciousness emerges from biology, just as biology itself emerges from chemistry, which in turn emerges from physics. We call this less expansive concept of consciousness "little-C." It agrees with the neuroscientists' view that the processes of the mind are identical to states and processes of the brain. It also agrees with a more recent interpretation of quantum theory motivated by an attempt to rid it of paradoxes, the Many Worlds interpretation, in which observers are a part of the mathematics of physics.

Philosophers of science believe that these modern quantum physics views of consciousness have parallels in ancient philosophy. Big-C is like the theory of mind in Vedanta, in which consciousness is the fundamental basis of reality, on par with the physical universe.

Little-C, in contrast, is quite similar to Buddhism. Although the Buddha chose not to address the question of the nature of consciousness, his followers declared that mind and consciousness arise out of emptiness or nothingness.

Big-C and Scientific Discovery
Scientists are also exploring whether consciousness is always a computational process. Some scholars have argued that the creative moment is not at the end of a deliberate computation. For instance, dreams or visions are supposed to have inspired Elias Howe's 1845 design of the modern sewing machine, and August Kekulé's discovery of the structure of benzene in 1862.

A dramatic piece of evidence in favor of big-C consciousness existing all on its own is the life of self-taught Indian mathematician Srinivasa Ramanujan, who died in 1920 at the age of 32. His notebook, which was lost and forgotten for about 50 years and published only in 1988, contains several thousand formulas, without proof, in various areas of mathematics, that were well ahead of their time. Furthermore, the methods by which he found the formulas remain elusive. He himself claimed that they were revealed to him by a goddess while he was asleep.

The concept of big-C consciousness raises the questions of how it is related to matter, and how matter and mind mutually influence each other. Consciousness alone cannot make physical changes to the world, but perhaps it can change the probabilities in the evolution of quantum processes. The act of observation can freeze and even influence atoms' movements, as Cornell physicists demonstrated in 2015. This may very well be an explanation of how matter and mind interact.

Mind and Self-Organizing Systems
It is possible that the phenomenon of consciousness requires a self-organizing system, like the brain's physical structure. If so, then current machines will come up short.

Scholars don't know whether adaptive self-organizing machines can be designed to be as sophisticated as the human brain; we lack a mathematical theory of computation for systems like that. Perhaps it's true that only biological machines can be sufficiently creative and flexible. But then that suggests people should, or soon will, start working on engineering new biological structures that are, or could become, conscious.


Will we be replaced by robots? — A Better Man

I was asked this question by the head of a new AI startup whose technology, I quote, aims "to change the world." Will jobs done by regular people be replaced? Fast Company predicts these will be the jobs hit worst. 1. INSURANCE UNDERWRITERS AND CLAIMS REPRESENTATIVES 2. BANK TELLERS […]

via Will we be replaced by robots? — A Better Man


Artificial intelligence

Artificial intelligence (AI, also machine intelligence, MI) is intelligence exhibited by machines, rather than humans or other animals (natural intelligence, NI). In computer science, the field of AI research defines itself as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of success at some goal.[1] Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”.[2]
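That textbook definition of an "intelligent agent" is easy to sketch in code. Below is a minimal, hypothetical illustration (class names and numbers invented here, not taken from any particular AI system): an agent maps each percept from its environment to an action chosen to further its goal.

```python
from abc import ABC, abstractmethod

class Agent(ABC):
    """Minimal 'intelligent agent' interface: perceive, then act."""

    @abstractmethod
    def act(self, percept):
        """Map the latest percept to an action that advances the goal."""

class ThermostatAgent(Agent):
    """Toy reflex agent whose goal is holding a temperature setpoint."""

    def __init__(self, setpoint=21.0):
        self.setpoint = setpoint

    def act(self, percept):
        # percept is the current room temperature
        if percept < self.setpoint - 0.5:
            return "heat_on"
        if percept > self.setpoint + 0.5:
            return "heat_off"
        return "idle"

agent = ThermostatAgent()
for temperature in (19.0, 21.0, 23.5):
    print(temperature, "->", agent.act(temperature))
```

Even this trivial reflex agent fits the definition: it perceives its environment and takes the action that maximizes its chance of meeting its goal.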

The scope of AI is disputed: as machines become increasingly capable, tasks considered as requiring “intelligence” are often removed from the definition, a phenomenon known as the AI effect, leading to the quip “AI is whatever hasn’t been done yet.”[3] For instance, optical character recognition is frequently excluded from “artificial intelligence”, having become a routine technology.[4] Capabilities generally classified as AI, as of 2017, include successfully understanding human speech,[5] competing at a high level in strategic game systems (such as chess and Go[6]), autonomous cars, intelligent routing in content delivery networks, military simulations, and interpreting complex data.

Artificial intelligence was founded as an academic discipline in 1956, and in the years since has experienced several waves of optimism,[7][8] followed by disappointment and the loss of funding (known as an “AI winter”),[9][10] followed by new approaches, success and renewed funding.[11] For most of its history, AI research has been divided into subfields that often fail to communicate with each other.[12] However, in the early 21st century statistical approaches to machine learning became successful enough to eclipse all other tools, approaches, problems and schools of thought.[11]

The traditional problems (or goals) of AI research include reasoning, knowledge, planning, learning, natural language processing, perception and the ability to move and manipulate objects.[13] General intelligence is among the field’s long-term goals.[14] Approaches include statistical methods, computational intelligence, and traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimization, neural networks and methods based on statistics, probability and economics. The AI field draws upon computer science, mathematics, psychology, linguistics, philosophy, neuroscience, artificial psychology and many others.

The field was founded on the claim that human intelligence “can be so precisely described that a machine can be made to simulate it”.[15] This raises philosophical arguments about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues which have been explored by myth, fiction and philosophy since antiquity.[16] Some people also consider AI a danger to humanity if it progresses unabatedly.[17]

In the twenty-first century, AI techniques have experienced a resurgence following concurrent advances in computer power, large amounts of data, and theoretical understanding, and AI techniques have become an essential part of the technology industry, helping to solve many challenging problems in computer science.[18]


WELCOME TO MY TECHNICAL UNIVERSE: the physics of cognitive systems

I used to have a description of each of my papers on this page, but it got very boring to read as the numbers grew, so I moved most of it to here. After graduate work on the role of atomic and molecular chemistry in cosmic reionization, I have mainly focused my research on issues related to constraining cosmological models. A suite of papers developed methods for analyzing cosmological data sets and applied them to various CMB experiments and galaxy redshift surveys, often in collaboration with the experimentalists who had taken the data. Another series of papers tackled various "dirty laundry" issues such as microwave foregrounds and mass-to-light bias. Other papers like this one develop and apply techniques for clarifying the big picture in cosmology: comparing and combining diverse cosmological probes, cross-checking for consistency and constraining cosmological models and their free parameters. (The difference between cosmology and ice hockey is that I don't get penalized for cross-checking…)

My main current research interest is cosmology theory and phenomenology. I'm particularly enthusiastic about the prospects of comparing and combining current and upcoming data on CMB, LSS, galaxy clusters, lensing, LyA forest clustering, SN Ia, 21 cm tomography, etc. to raise the ambition level beyond the current cosmological parameter game, testing rather than assuming the underlying physics. This paper contains my battle cry. I also retain a strong interest in low-level nuts-and-bolts analysis and interpretation of data, firmly believing that the devil is in the details, and am actively working on neutral hydrogen tomography theory, experiment and data analysis for our Omniscope project, which you can read all about here.

OTHER RESEARCH: SIDE INTERESTS
Early galaxy formation and the end of the cosmic dark ages
One of the main challenges in modern cosmology is to quantify how small density fluctuations at the recombination epoch at redshift around z=1000 evolved into the galaxies and the large-scale structure we observe in the universe today. My Ph.D. thesis with Joe Silk focused on ways of probing the interesting intermediate epoch. The emphasis was on the role played by non-linear feedback, where a small fraction of matter forming luminous objects such as stars or QSOs can inject enough energy into their surroundings to radically alter subsequent events. We know that the intergalactic medium (IGM) was reionized at some point, but the details of when and how this occurred remain open. The absence of a Gunn-Peterson trough in the spectra of high-redshift quasars suggests that it happened before z=5, which could be achieved through supernova-driven winds from early galaxies. Photoionization was thought to be able to partially reionize the IGM much earlier, perhaps early enough to affect the cosmic microwave background (CMB) fluctuations, especially in an open universe. However, extremely early reionization is ruled out by the COBE FIRAS constraints on the Compton y-distortion. To make predictions for when the first objects formed and how big they were, you need to worry about something I hate: molecules. Although I was so fed up with rate discrepancies in the molecule literature that I verged on making myself a Ghostbusters-style T-shirt reading "MOLECULES – JUST SAY NO", the irony is that the molecule paper I hated so much ended up being one of my most cited ones, whereas others that I had lots of fun with went largely unnoticed…

Math problems
I'm also interested in physics-related mathematics problems in general. For instance, if you don't believe that part of a constrained elliptic metal sheet may bend towards you if you try to push it away, you are making the same mistake that the famous mathematician Hadamard once did.

I love working on projects that involve cool questions, great state-of-the-art data and powerful physical/mathematical/computational tools. During my first quarter-century as a physics researcher, this criterion has led me to work mainly on cosmology and quantum information. Although I'm continuing my cosmology work with the HERA collaboration, the main focus of my current research is on the physics of cognitive systems: using physics-based techniques to understand how brains work and to build better AI (artificial intelligence) systems. If you're interested in working with me on these topics, please let me know, as I'm potentially looking for new students and postdocs (see requirements). I'm fortunate to have collaborators who generously share amazing neuroscience data with my group, including Ed Boyden, Emery Brown and Tomaso Poggio at MIT and Gabriel Kreiman at Harvard, and to have such inspiring colleagues here in our MIT Physics Department in our new division studying the physics of living systems. I've been pleasantly surprised by how many of the data analysis techniques I developed for cosmology can be adapted to neuroscience data as well. There's clearly no shortage of fascinating questions surrounding the physics of intelligence, and there's no shortage of powerful theoretical tools either, ranging from neural network physics and non-equilibrium statistical mechanics to information theory, the renormalization group and deep learning. Intriguingly and surprisingly, there's a duality between the last two. I recently helped organize conferences on the physics of information and artificial intelligence.

I'm very interested in the question of how to model an observer in physics, and whether simple necessary conditions for a physical system being a conscious observer can help explain how the familiar object hierarchy of the classical world emerges from the raw mathematical formalism of quantum mechanics. Here's a taxonomy of proposed consciousness measures. Here's a TEDx talk of mine about the physics of consciousness. Here's an intriguing connection between critical behavior in magnets, language, music and DNA. In older work of mine on the physics of the brain, I showed that neuron decoherence is way too fast for the brain to be a quantum computer. However, it's nonetheless interesting to study our brains as quantum systems, to better understand why they perceive the sort of classical world that they do. For example, why do we feel that we live in real space rather than Fourier space, even though both are equally valid quantum descriptions related by a unitary transformation?

Quantum information
My work on the physics of cognitive systems is a natural outgrowth of my long-standing interest in quantum information, both for enabling new technologies such as quantum computing and for shedding new light on how the world fundamentally works. For example, I'm interested in how the second law of thermodynamics can be generalized to explain how the entropy of a system typically decreases while you observe a system and increases while you don't, and how this can help explain how inflation causes the emergence of an arrow of time. When you don't observe an interacting system, you can get decoherence, which I had the joy of rediscovering as a grad student; if you'd like to know more about what this is, check out my article with John Archibald Wheeler in Scientific American here. I'm interested in decoherence both for its quantitative implications for quantum computing etc. and for its philosophical implications for the interpretation of quantum mechanics. For much more on this wackier side of mine, click the banana icon above. Since macroscopic systems are virtually impossible to isolate from their surroundings, a number of quantitative predictions can be made for how their wavefunction will appear to collapse, in good agreement with what we in fact observe. Similar quantitative predictions can be made for models of heat baths, showing how the effects of the environment cause the familiar entropy increase and apparent directionality of time. Intriguingly, decoherence can also be shown to produce generalized coherent states, indicating that these are not merely a useful approximation, but indeed a type of quantum state that we should expect nature to be full of. All these changes in the quantum density matrix can in principle be measured experimentally, with phases and all.
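To make that last claim slightly more concrete, here is a toy numerical sketch of textbook phase damping (an assumed exponential decay of coherences with an arbitrary timescale, not a model of any specific experiment): the off-diagonal elements of a qubit's density matrix decay while the diagonal probabilities stay fixed.

```python
import numpy as np

# A qubit starts in the equal superposition |+> = (|0> + |1>)/sqrt(2).
# Coupling to an environment suppresses the off-diagonal ("coherence")
# terms of the density matrix, while the diagonal probabilities survive.
t_dec = 1.0  # hypothetical decoherence timescale, arbitrary units

rho0 = 0.5 * np.array([[1, 1],
                       [1, 1]], dtype=complex)  # pure state |+><+|

def decohered(rho, t):
    damp = np.exp(-t / t_dec)   # assumed exponential coherence decay
    out = rho.copy()
    out[0, 1] *= damp
    out[1, 0] *= damp
    return out

for t in (0.0, 1.0, 5.0):
    print(f"t={t}:\n{np.round(decohered(rho0, t), 3)}")
```

At t=0 the matrix is fully coherent; by t=5 it is nearly diagonal, i.e., the superposition has apparently "collapsed" into a classical mixture, which is the kind of density-matrix change referred to above.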

Cosmology
My cosmology research has been focused on precision cosmology, e.g., combining theoretical work with new measurements to place sharp constraints on cosmological models and their free parameters. (Skip to here if you already know all this.) Spectacular new measurements are providing powerful tools for this:

So far, I've worked mainly on CMB, LSS and 21 cm tomography, with some papers involving lensing, SN Ia and LyAF as well. Why do I find cosmology exciting? (Even if you don't find cosmology exciting, there are good reasons why you should support physics research.)

  1. There are some very basic questions that still haven’t been answered. For instance,
    • Is really only 5% of our universe made of atoms? So it seems, but what precisely is the weird “dark matter” and “dark energy” that make up the rest?
    • Will the Universe expand forever or end in a cataclysmic crunch or big rip? The smart money is now on the first option, but the jury is still out.
    • How did it all begin, or did it? This is linked to particle physics and unifying gravity with quantum theory.
    • Are there infinitely many other stars, or does space connect back on itself? Most of my colleagues assume it is infinite and the data supports this, but we don’t know yet.
  2. Thanks to an avalanche of great new data, driven by advances in satellite, detector and computer technology, we may be only years away from answering some of these questions.

Satellites Rock! Since our atmosphere messes up most electromagnetic waves coming from space (the main exceptions being radio waves and visible light), the advent of satellites has revolutionized our ability to photograph the Universe in microwaves, infrared light, ultraviolet light, X-rays and gamma rays. New low-temperature detectors have greatly improved what can be done from the ground as well, and the computer revolution has enabled us to gather and process huge data quantities, doing research that would have been unthinkable twenty years ago. This data avalanche has transformed cosmology from being a mainly theoretical field, occasionally ridiculed as speculative and flaky, into a data-driven quantitative field where competing theories can be tested with ever-increasing precision. I find CMB, LSS, lensing, SN Ia, LyAF, clusters and BBN to be very exciting areas, since they are all being transformed by new high-precision measurements as described below. Since each of them measures different but related aspects of the Universe, they both complement each other and allow lots of cross-checks.

What are these cosmological parameters?
Cosmic matter budget
In our standard cosmological model, the Universe was once in an extremely dense and hot state, where things were essentially the same everywhere in space, with only tiny fluctuations (at the level of 0.00001) in the density. As the Universe expanded and cooled, gravitational instability caused these fluctuations to grow into the galaxies and the large-scale structure that we observe in the Universe today. To calculate the details of this, we need to know about a dozen numbers, so-called cosmological parameters. Most of these parameters specify the cosmic matter budget, i.e., what the density of the Universe is made up of; the amounts of the following ingredients:

  • Baryons – the kind of particles that you and I and all the chemical elements we learned about in school are made of: protons & neutrons. Baryons appear to make up only about 5% of all stuff in the Universe.
  • Photons – the particles that make up light. Their density is the best measured one on this list.
  • Massive neutrinos – neutrinos are very shy particles. They are known to exist, and now at least two of the three or more kinds are known to have mass.
  • Cold dark matter – unseen mystery particles widely believed to exist. There seems to be about five times more of this strange stuff than baryons, making us a minority in the Universe.
  • Curvature – if the total density differs from a certain critical value, space will be curved. Sufficiently high density would make space be finite, curving back on itself like the 3D surface of a 4D hypersphere.
  • Dark energy – little more than a fancy name for our ignorance of what seems to make up about two thirds of the matter budget. One popular candidate is a "cosmological constant", a.k.a. Lambda, which Einstein invented and later called his greatest blunder. Other candidates are more complicated modifications to Einstein's theory of gravity, as well as energy fields known as "quintessence". Dark energy causes gravitational repulsion in place of attraction, and combining new SN Ia and CMB data indicates that we might be living with Lambda after all. (The quoted fractions roughly add up to the total, as the quick tally after this list illustrates.)
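As a quick arithmetic sanity check on the bullets above (with assumed round numbers for illustration, not measured values), the quoted fractions of the critical density should sum to roughly one:

```python
# Rough cosmic matter budget as quoted above: baryons ~5%, cold dark matter
# ~5x the baryons, dark energy ~two thirds; photons, neutrinos and curvature
# are assumed small here. Fractions are relative to the critical density.
budget = {
    "baryons": 0.05,
    "cold dark matter": 0.25,
    "dark energy": 0.68,
    "photons + neutrinos + curvature": 0.02,
}
print(sum(budget.values()))  # ~1.0, i.e., the ingredients account for everything
```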

Then there are a few parameters describing those tiny fluctuations in the early Universe: exactly how tiny they were, the ratio of fluctuations on small and large scales, the relative phase of fluctuations in the different types of matter, etc. Accurately measuring these parameters would test the most popular theory for the origin of these wiggles, known as inflation, and teach us about physics at much higher energies than are accessible with particle accelerator experiments. Finally, there are some parameters that Dick Bond would refer to as "gastrophysics", since they involve gas and other ghastly stuff. One example is the extent to which feedback from the first galaxies has affected the CMB fluctuations via reionization. Another example is bias, the relation between fluctuations in the matter density and the number of galaxies. One of my main current interests is using the avalanche of new data to raise the ambition level beyond cosmological parameters, testing rather than assuming the underlying physics. My battle cry is published here, with nuts-and-bolts details here and here.

The cosmic toolbox
Here is a brief summary of some key cosmological observables and what they can teach us about cosmological parameters.

Photos of the cosmic microwave background (CMB) radiation like the one to the left show us the most distant object we can see: a hot, opaque wall of glowing hydrogen plasma about 14 billion light years away. Why is it there? Well, as we look further away, we're seeing things that happened longer ago, since it's taken the light a long time to get here. We see the Sun as it was eight minutes ago, the Andromeda galaxy the way it was a few million years ago and this glowing surface as it was just 400,000 years after the Big Bang. We can see that far back since the hydrogen gas that fills intergalactic space is transparent, but we can't see further, since earlier the hydrogen was so hot that it was an ionized plasma, opaque to light, looking like a hot glowing wall just like the surface of the Sun. The detailed patterns of hotter and colder spots on this wall constitute a goldmine of information about the cosmological parameters mentioned above. If you are a newcomer and want an introduction to CMB fluctuations and what we can learn from them, I've written a review here. If you don't have a physics background, I recommend the on-line tutorials by Wayne Hu and Ned Wright. Two promising new CMB fronts are opening up, CMB polarization and arcminute-scale CMB, and they are likely to keep the CMB field lively for at least another decade.

Hydrogen tomography
Mapping our universe in 3D by imaging the redshifted 21 cm line from neutral hydrogen has the potential to overtake the cosmic microwave background as our most powerful cosmological probe, because it can map a much larger volume of our Universe, shedding new light on the epoch of reionization, inflation, dark matter, dark energy and neutrino masses. For this reason, my group built MITEoR, a pathfinder low-frequency radio interferometer whose goal was to test technologies that greatly reduce the cost of such 3D mapping for a given sensitivity. MITEoR accomplished this by using massive baseline redundancy both to enable automated precision calibration and to cut the correlator cost scaling from N² to N log N, where N is the number of antennas. The success of MITEoR with its 64 dual-polarization elements bodes well for the more ambitious HERA project, which incorporates many of the technologies MITEoR tested using dramatically larger collecting area.
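The N² versus N log N point can be illustrated with a toy calculation. The sketch below is a schematic of the scaling argument, not the MITEoR or HERA pipeline: for antennas on a regular 1D grid, every pair with the same separation measures the same baseline, so the summed pair products form an autocorrelation that an FFT evaluates in O(N log N).

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
x = rng.normal(size=N) + 1j * rng.normal(size=N)  # one voltage sample per antenna

# Naive correlator: one product per antenna pair, O(N^2) work in total.
V_naive = np.array([np.sum(x[k:] * np.conj(x[:N - k])) for k in range(N)])

# Redundant-array shortcut: all pairs with separation k measure the same
# baseline, and the summed products V[k] = sum_i x[i+k] * conj(x[i]) form an
# autocorrelation, computable via FFT in O(N log N). Zero-pad to length 2N
# to avoid circular wraparound.
X = np.fft.fft(x, 2 * N)
V_fft = np.fft.ifft(X * np.conj(X))[:N]

print(np.allclose(V_naive, V_fft))  # True: same visibilities, cheaper scaling
```

A real correlator works per frequency channel on streaming data, but the collapse of O(N²) pair products into one FFT-sized computation is the essence of the cost saving quoted above.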

Large-scale structure: 3D mapping of the Universe with galaxy redshift surveys offers another window on dark matter properties, through its gravitational effects on galaxy clustering. This field is currently being transformed by ever larger galaxy redshift surveys. I've had lots of fun working with my colleagues on the Sloan Digital Sky Survey (SDSS) to carefully analyze the gargantuan galaxy maps and work out what they tell us about our cosmic composition, origins and ultimate fate.

Galaxy clusters: The abundance of galaxy clusters, the largest gravitationally bound and equilibrated blobs of stuff in the Universe, is a very sensitive probe of both the cosmic expansion history and the growth of matter clustering. Many powerful cluster-finding techniques are contributing to rapid growth in the number of known clusters and our knowledge of their properties: identifying them in 3D galaxy surveys, seeing their hot gas as hot spots in X-ray maps or cold spots in microwave maps (the so-called SZ effect), or spotting their gravitational effects with gravitational lensing.

Gravitational lensing: Yet another probe of dark matter is offered by gravitational lensing, whereby its gravitational pull bends light rays and distorts images of distant objects. The first large-scale detections of this effect were reported by four groups (astro-ph/0002500, 0003008, 0003014, 0003338) in the year 2000, and I anticipate making heavy use of such measurements as they continue to improve, partly in collaboration with Bhuvnesh Jain at Penn. Lensing is ultimately as promising as the CMB and is free from the murky bias issues plaguing LSS and LyAF measurements, since it probes the matter density directly via its gravitational pull. I've also dabbled some in the stronger lensing effects caused by galaxy cores, which offer additional insights into the detailed nature of the dark matter.

Supernovae Ia: If a white dwarf (the corpse of a burned-out low-mass star like our Sun) orbits another dying star, it may gradually steal its gas and exceed the maximum mass at which it can be stable. This makes it collapse under its own weight and blow up in a cataclysmic explosion called a supernova of type Ia. Since all of these cosmic bombs weigh the same when they go off (about 1.4 solar masses, the so-called Chandrasekhar mass), they all release roughly the same amount of energy, and a more detailed calibration of this energy is possible by measuring how fast the explosion dims, making it the best "standard candle" visible at cosmological distances. The Supernova Cosmology Project and the high-z SN search team mapped out how bright SN Ia looked at different redshifts and found the first evidence in 1998 that the expansion of the Universe was accelerating. This approach can ultimately provide a direct measurement of the density of the Universe as a function of time, helping unravel the nature of dark energy; I hope the SNAP project or one of its competitors gets funded. The image to the left resulted from a different type of supernova, but I couldn't resist showing it anyway.
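As a back-of-envelope illustration of why a standard candle is so useful, the textbook distance-modulus relation m - M = 5*log10(d / 10 pc) turns an apparent magnitude straight into a distance. The numbers below are assumed for illustration (a commonly quoted SN Ia peak absolute magnitude and a hypothetical observed magnitude), and cosmological corrections such as redshifting of the light are ignored:

```python
import math

# Standard-candle distance from the distance modulus m - M = 5*log10(d/10pc).
M_peak = -19.3     # assumed SN Ia peak absolute magnitude (after calibration)
m_observed = 24.0  # hypothetical apparent magnitude of a distant SN Ia

d_parsec = 10 ** ((m_observed - M_peak + 5) / 5)
print(f"distance ~ {d_parsec / 1e9:.1f} Gpc")  # ~4.6 Gpc for these numbers
```

Comparing such distances with redshifts across many supernovae is what revealed the accelerating expansion mentioned above.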

Lyman Alpha Forest: The so-called Lyman Alpha Forest, cosmic gas clouds backlit by quasars, offers yet another new and exciting probe of how dark matter has clumped ordinary matter together, and is sensitive to an epoch when the Universe was merely 10-20% of its present age. Although relating the measured absorption to the densities of gas and dark matter involves some complications, it completely circumvents the Pandora's box of galaxy biasing. Cosmic observations are rapidly advancing on many other fronts as well, e.g., with direct measurements of the cosmic expansion rate and the cosmic baryon fraction.

3Q: D. Fox Harrell on his video game for the #MeToo era

The Imagination, Computation, and Expression Laboratory at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has released a new video game called Grayscale, which is designed to sensitize players to problems of sexism, sexual harassment, and sexual assault in the workplace. D. Fox Harrell, the lab’s director, and students in his course CMS.628 (Advanced Identity Representation) completed the initial version of the game more than a year ago, and the ICE Lab has been working on it consistently since. But it addresses many of the themes brought to the fore by the recent #MeToo movement. The game is built atop the ICE Lab’s Chimeria computational platform, which was designed to give computer systems a more subtle, flexible, and dynamic model of how humans categorize members of various groups. MIT News spoke to Harrell, a professor of digital media and artificial intelligence in CSAIL and Comparative Media Studies/Writing, about Grayscale (or to give it its more formal name, Chimeria:Grayscale).

Q: How does the game work?

A: You’re playing the role of an employee of a corporation called Grayscale. It’s a kind of melancholy place: Everything is gray toned. The interface looks like a streamlined email interface. You’re a temporary human resources manager, and as you play, messages begin coming in. And the messages from other employees have embedded within them evidence of different types of sexism from the Fiske and Glick social-science model.

We chose this particular model of sexism because it addresses this notion of ambivalent sexism, which includes both hostile sexism — which is the very overt sexism that we know well and could include everything from heinous assaults to gender discrimination — and what they call “benevolent sexism.” It’s not benevolent in the sense that it’s anything good; it’s oppressive too. Fixing a woman’s computer for her under the assumption she cannot do it herself, these researchers would say, is “protective paternalism.” “Complimentary gender differentiation” involves statements like, “Oh, you must be so emotionally adept.”

Over the course of the week you have new emails coming in, new fires to put out. Some of them are more subtle. For instance, the office temperature is deemed to be too cold by some employee. There’s been research that shows that’s a place of inequity because people perceive temperature differently, in part based on gender or even clothing that we typically associate with gender.

That’s a kind of gentle introduction into this. But some of them are more obvious in different sorts of ways. So a co-worker, say, commenting that wearing yoga pants in the office is (a) unprofessional and (b) distracting. He sends that to the entire list. So do you tell everyone to look at the manual for the dress code? Or do you comment to this guy? Or do you tell everybody it’s actually commenting on your coworker’s attire being “distracting” that’s the problem?

Other emails deal more directly with assault, like somebody who touched somebody inappropriately in an office space.

So you have to make choices about all of these different options. You might have four draft messages, as if you’d been deliberating about which one you’re going to send, and then you finally hit reply with one of your possible drafts. And on the back end, we have each of those connected with particular ways that sexism is exhibited.
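One can imagine that back-end bookkeeping roughly as follows. This is a hypothetical sketch for illustration only, not the ICE Lab's actual code: each draft reply carries tags for what it exhibits or challenges, and the player's choices accumulate into a profile used for the end-of-game feedback.

```python
# Hypothetical draft-reply tags (invented names); each choice is linked to
# the kind of sexism it exhibits or pushes back against.
DRAFT_TAGS = {
    "forward_dress_code_manual":   {"hostile": 0, "benevolent": 0, "challenges": 1},
    "scold_list_about_yoga_pants": {"hostile": 1, "benevolent": 0, "challenges": 0},
    "praise_her_emotional_skills": {"hostile": 0, "benevolent": 1, "challenges": 0},
}

def update_profile(profile, choice):
    """Accumulate the tags of a chosen draft into the player's profile."""
    for axis, weight in DRAFT_TAGS[choice].items():
        profile[axis] = profile.get(axis, 0) + weight
    return profile

profile = {}
for choice in ("forward_dress_code_manual", "praise_her_emotional_skills"):
    profile = update_profile(profile, choice)
print(profile)  # {'hostile': 0, 'benevolent': 1, 'challenges': 1}
```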

The thing that people find compelling about it is that there’s not always an easy answer for each of the questions. You might find tension between one answer and another. Should I send this to the entire list, or should I send it just to the person directly? Or you might think, I really hate the way this guy phrased this email, but at the same time, maybe there are standards within the manual.

Finally, you get your performance evaluation at the very end of the story. We didn’t want it to be straightforward, that if you’ve been nonsexist you get the job, and if you’ve been sexist you don’t. You end up with some kinds of tensions, because maybe you’ve been promoted, but you compromised your values. Maybe you’re kept on but not really seen as a team player, so you have to watch your step. You’re navigating those kinds of tensions between what is seen as the corporate culture, what would get you ahead, and your own personal thoughts about the sexism that’s displayed.

This also isn’t the only vector through which you get feedback. You’re also getting feedback based on what happens to the other characters as well.

Q: Whom do you envision playing this game?

A: There have been thematic indie games that have come out recently. There’s a game addressing issues like isolation and human connection, Firewatch, that was pretty popular. And games about social issues, like the game Dys4ia, which is a game about gender dysphoria.

There was also a lot of press recently about a game called Hair Nah. This was a game related to the fact that for a lot of African-American women, other people like to touch their hair in a way that’s as irritating as it is othering. Such games act like editorials about particular topics. They are not novels, but more like opinion pieces about an issue.

People who like this type of indie game, I think, [would like Grayscale].

We intend for it to be a compelling narrative. That means understanding the back stories of the co-workers, getting to know their personalities. So there could be a bit of humor, a bit of pathos.

Q: How does the Chimeria platform work?

A: At the core is the Chimeria engine, which models social-category membership with more nuance than a lot of other systems — in particular, building on models that come from cognitive science on how humans cognitively categorize. We enable people to be members of multiple categories or to have gradient degrees of categories and have those categories change over time. It’s a patent-pending technology I’m in the process of spinning out now through my company called Blues Identity Systems.

Most computational systems that categorize users — whether that’s your social-media profile or e-commerce account or video-game character — model category membership in almost a taxonomic way: If you have a certain number of features that are defined to be the features of that category, then you’re going to be a member of that category.

In cognitive science, researchers like George Lakoff and Eleanor Rosch have this idea that actually that’s not the way the human brain categorizes. Eleanor Rosch’s famous work argues that we categorize based on prototypes. When people categorize, say, a bird, it’s not because we’re going down this list of features: “Does it have feathers?” “Check.” “Does it have a beak?” It’s more that we have a typical bird in our mind, and we look at how it relates to that prototype. If you say, think of a bird, the idea is people wouldn’t think of a penguin or ostrich. They’d think of something that is prototypical to them — for example, in the U.S. it might be a robin. And then there’s gradient membership from there.

So what I thought was, what if we could take out the taxonomic model currently in a lot of systems and replace it with this more nuanced model? What new kinds of possibilities emerge from there?
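Here is one minimal sketch of that replacement, assuming a simple vector representation (illustrative only, not the actual Chimeria engine): each category is a prototype feature vector, and membership is a graded similarity score, so an entity can belong to several categories to different degrees.

```python
import numpy as np

# Feature order: has_feathers, flies, sings, swims (hypothetical features).
prototypes = {
    "songbird":  np.array([1.0, 1.0, 1.0, 0.0]),
    "waterfowl": np.array([1.0, 1.0, 0.0, 1.0]),
}

def membership(entity, prototype):
    # Cosine similarity as a graded degree of membership (0..1 for these vectors),
    # instead of a yes/no taxonomic feature checklist.
    return float(entity @ prototype /
                 (np.linalg.norm(entity) * np.linalg.norm(prototype)))

robin   = np.array([1.0, 1.0, 1.0, 0.0])
penguin = np.array([1.0, 0.0, 0.0, 1.0])  # feathered, flightless swimmer

for name, e in (("robin", robin), ("penguin", penguin)):
    print(name, {c: round(membership(e, p), 2) for c, p in prototypes.items()})
```

The robin scores 1.0 against the songbird prototype, while the penguin is a weak songbird (0.41) and a stronger waterfowl (0.82): gradient membership in multiple categories at once, exactly the kind of nuance the taxonomic model lacks.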

One of the first papers we wrote about Chimeria involved using it for authoring conversations in games. A lot of times now, it’s a branching narrative: You have four choices, say, and four more for each of those, and so on. That’s exponential growth in terms of choices.

Instead, we can look at your category. Have you been playing as a physically oriented character, like a warrior? Have you been playing aggressively? And so on. And then based upon your category membership — and how it’s been changing — we can customize conversation.

So instead of branching plot points, you might have wild cards within the text that change based upon the current category that you’re in — or the trajectory. It actually breaks bottlenecks in authoring, but it also opens up new types of expressive possibilities.
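A toy sketch of those wild cards follows (hypothetical names, not the Chimeria authoring system): one line of dialogue with a slot whose filler is chosen by whichever gradient category the player currently inhabits most strongly, rather than by a branching plot point.

```python
# One authored line per category, instead of an exponential tree of branches.
GREETINGS = {
    "warrior":  "You look like you could knock that door down yourself, {name}.",
    "diplomat": "Perhaps a few well-chosen words will open that door, {name}.",
}

def render_line(name, memberships):
    # Pick the category the player currently inhabits most strongly; a fuller
    # system could also use the trajectory of these scores over time.
    category = max(memberships, key=memberships.get)
    return GREETINGS[category].format(name=name)

print(render_line("Ada", {"warrior": 0.8, "diplomat": 0.3}))
print(render_line("Ada", {"warrior": 0.2, "diplomat": 0.7}))
```

Adding a new category here means writing one more template, not doubling a branching tree, which is the authoring-bottleneck point made above.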


Three EECS professors join leadership team

The Department of Electrical Engineering and Computer Science (EECS) has announced the appointment of two new associate department heads, and the creation of the new role of associate department head for strategic directions. 

Professors Saman Amarasinghe and Joel Voldman have been named as new associate department heads, effective immediately, says EECS Department Head Asu Ozdaglar. Ozdaglar became department head on Jan. 1, replacing Anantha Chandrakasan, who is now dean of the School of Engineering. Professor Nancy Lynch will be the inaugural holder of the new position of associate department head for strategic directions, overseeing new academic and research initiatives.

“I am thrilled to be starting my own new role in collaboration with such a strong leadership team,” says Ozdaglar, who is also the Joseph F. and Nancy P. Keithley Professor of Electrical Engineering and Computer Science. “All three are distinguished scholars and dedicated educators whose experience will contribute greatly to shaping the department’s future.” 

Saman Amarasinghe leads the Commit compiler research group at the Computer Science and Artificial Intelligence Laboratory (CSAIL). His group focuses on programming languages and compilers that maximize application performance on modern computing platforms. It has developed the Halide, TACO, Simit, StreamIt, StreamJIT, PetaBricks, MILK, Cimple, and GraphIt domain-specific languages and compilers, which all combine language design and sophisticated compilation techniques to deliver unprecedented performance for targeted application domains such as image processing, stream computations, and graph analytics.

Amarasinghe also pioneered the application of machine learning for compiler optimization, from Meta optimization in 2003 to OpenTuner extendable autotuner today. He was the co-leader of the Raw architecture project with EECS Professor and edX CEO Anant Agarwal. Recently, his work received a best-paper award at the 2017 Association for Computing Machinery (ACM) Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA) conference and a best student-paper award at the 2017 Big Data conference.

Amarasinghe was the founder of Determina Inc., a startup based on computer security research pioneered in his MIT research group and later acquired by VMware. He is the faculty director for MIT Global Startup Labs, whose summer programs in 17 countries have helped launch more than 20 startups.

A faculty member since 1997, Amarasinghe served as an EECS education officer and currently chairs the department’s computer science graduate admissions committee. He developed the popular class 6.172 (Performance Engineering of Software Systems) with Charles Leiserson, the Edwin Sibley Webster Professor of EECS.  Recently, he has created individualized software project classes such as the Open Source Software Project Lab, the Open Source Entrepreneurship Lab, and the Bring Your Own Software Project Lab.

He received a bachelor’s degree in EECS from Cornell University, and a master’s degree and PhD in electrical engineering from Stanford University. Amarasinghe succeeds Lynch, who had been an associate department head since September 2016.   

Joel Voldman is a professor in EECS and a principal investigator in the Research Laboratory of Electronics (RLE) and the Microsystems Technology Laboratories (MTL).

He received a bachelor’s degree in electrical engineering from the University of Massachusetts, Amherst, and SM and PhD degrees in electrical engineering from MIT. During his time at MIT, he developed biomedical microelectromechanical systems for single-cell analysis. 

Afterward, he was a postdoctoral associate in George Church’s lab at Harvard Medical School, where he studied developmental biology. He returned to MIT as an assistant professor in EECS in 2001. He was awarded the NBX Career Development Chair in 2004, became an associate professor in 2006, and was promoted to professor in 2013.

Voldman’s research focuses on developing microfluidic technology for biology and medicine, with an emphasis on cell sorting and stem cell biology. He has developed a host of technologies to arrange, culture, and sort diverse cell types, including immune cells, endothelial cells, and stem cells. Current areas of research include recapitulating the induction of atherosclerosis on a microfluidic chip, and using microfluidic tools to study how immune cells decide to attack tumor cells. He is also interested in translational medical work, such as developing point-of-care drop-of-blood assays for proteins and rapid microfluidic tests for immune cell activation for the treatment of sepsis. 

In addition, Voldman has co-developed two introductory EECS courses. One class, 6.03 (Introduction to EECS via Medical Technology), uses medical devices to introduce EECS concepts such as signal processing and machine learning. The other, more recent class, 6.S08/6.08 (Interconnected Embedded Systems), uses the Internet of Things to introduce EECS concepts such as system partitioning, energy management, and hardware/software co-design.

Voldman's awards and honors include a National Science Foundation (NSF) CAREER award, an American Chemical Society (ACS) Young Innovator Award, a Bose Fellow grant, MIT's Jamieson Teaching Award, a Louis D. Smullin ('39) Award for Teaching Excellence from EECS, a Frank Quick Faculty Research Innovation Fellowship from EECS, an IEEE/ACM Best Advisor Award, and awards for posters and presentations at international conferences. Voldman succeeds Ozdaglar as associate department head.

Nancy Lynch, the NEC Professor of Software Science and Engineering, also heads the Theory of Distributed Systems research group in CSAIL. 

She is known for her fundamental contributions to the foundations of distributed computing. Her work applies a mathematical approach to explore the inherent limits on computability and complexity in distributed systems. Her best-known research is the FLP impossibility result for distributed consensus in the presence of process failures. Other research includes the I/O automata system modeling frameworks. Her recent work focuses on wireless network  algorithms and biological distributed algorithms.

Lynch has written or co-written hundreds of research articles. She is the author of the textbook “Distributed Algorithms” and co-author of “Atomic Transactions” and “The Theory of Timed I/O Automata.” She is an ACM Fellow, a Fellow of the American Academy of Arts and Sciences, and a member of the National Academy of Science and the National Academy of Engineering.  She has received the Dijkstra Prize twice, the van Wijngaarden prize, the Knuth Prize, the Piore Prize, and the Athena Prize.

A member of the MIT faculty since 1982, Lynch has supervised 30 PhD students and similar numbers of master’s-degree candidates and postdoctoral associates, many of whom have themselves become research leaders. She received a bachelor’s degree from Brooklyn College and a PhD from MIT, both in mathematics.


Full Body Scanners Market Is Expected To Grow At A CAGR Of 10.2% During The Forecast Period From 2017 To 2025

According to a new market research report published by Credence Research “Full Body Scanners Market (By Technology – Millimeter Wave Scanners and Backscatter X-ray Scanners; By Application – Airports and Transportation Terminal and Others) – Growth, Future Prospects, Competitive Analysis and Forecast 2017 – 2025”, the global full body scanners market was valued at US$ 198.8 Mn in 2016 and is expected to grow at a CAGR of 10.2% during the forecast period from 2017 to 2025.
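As a quick sanity check of the headline figures, one can compound the 2016 base value forward at the quoted CAGR. Since the report summary does not state whether 2017-2025 counts as eight or nine compounding years, both are shown below:

```python
# Project the 2016 market value forward at the quoted compound annual growth
# rate (CAGR). Values are from the report summary above.
base_2016 = 198.8   # US$ million
cagr = 0.102

for years in (8, 9):
    projected = base_2016 * (1 + cagr) ** years
    print(f"{years} compounding years -> US$ {projected:.1f} Mn")
```

Either way, the arithmetic implies a market size well above US$ 400 Mn by 2025 if the quoted growth rate holds.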

Request Sample: http://www.credenceresearch.com/sample-request/58633

The complete report is available at http://www.credenceresearch.com/report/full-body-scanners-market

Market Insights

Rising security threats and growing concerns about public safety have led to the increasing adoption of numerous advanced technologies related to physical security. Full body scanners offer security agencies a viable way to scan people and detect harmful objects with minimal physical contact. Factors such as growing discomfort among travelers with physical pat-downs, coupled with technological advances in imaging, have driven growth in the full body scanners market. Most major airports, especially in developed economies, have deployed full body scanners to help ensure public safety. In addition, full body scanners are seeing increasing adoption in various other settings such as correctional facilities, courthouses and government facilities.

Competitive Insights:

The global full body scanners market is fairly consolidated with a limited number of full body scanner providers operating across the world. Some of the leading providers in the full body scanners market include Smiths Group Plc., Rapiscan Systems, Adani Systems, Inc., L3 Security & Detection Systems, Westminster International Ltd, Iscon Imaging, Millivision Technologies, OD Security, Nuctech Company Limited and Brijot Imaging Systems, Inc.

Key Trends:

– Increasing investments towards physical security at airports and other transportation terminals

– Rising adoption of millimeter wave scanners on account of their benefits over backscatter X-ray scanners


International Fraud Awareness Week: Security Experts Comment

Ahead of International Fraud Awareness Week (13-19th November), which brings together anti-fraud experts and communities to discuss just how far-reaching the effects of fraud can be and how to minimise the risks, IT security experts Wyatt, Managing Director, and John Cassey, Director, at Protiviti, a global consultancy firm, commented below.

“Fraud risk management can only be effective if those responsible for identifying fraud cases have a full understanding of the criminal mind.”

“Organisations need effective controls that keep pace with potential fraud risks, regularly reviewed and updated as the business evolves and new risks are identified. The most effective control, however, lies with the employees themselves. There needs to be a shared understanding of acceptable behaviour, and an understanding that all employees are responsible for preventing and identifying misconduct. Promoting a positive message and rewarding high standards can be more effective in encouraging a unified corporate culture than a negative campaign focused on the consequences of wrongdoing.”

“Employees should also be provided with adequate training to understand how both external and internal fraud can affect the business, and to recognise the warning signs, including cyber-crime and phishing attacks.”

“Most information security programmes have now become cyber security programmes, and are very heavily weighted towards managing the unsophisticated outsider threat. This may in many cases be the typical, troublesome threat; however, the most significant security breaches and frauds often involve insiders, either as willing or unwitting participants (e.g. as a result of a phishing attack). In fact, many of the largest frauds have been initiated by an insider. However, many are not publicised, as organisations prefer to handle such cases internally.”

“Organisations should therefore invest more time focusing on insiders, looking at privileged access and data loss prevention (DLP) in particular. Authorising privileged access per use and enforcing segregation of duties through processes at the transaction level are also key and can help dramatically. The use of emerging technologies that leverage data analytics and artificial intelligence to identify changes in the behaviour of employees (behavioural analytics) can greatly improve control and help organisations manage cost.”
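The behavioural-analytics idea can be illustrated with a deliberately minimal sketch: flag an employee whose activity departs sharply from their own baseline. A real product models far richer features than this; a z-score on a single assumed metric just shows the principle.

```python
import numpy as np

# Hypothetical per-day file-download counts for one employee; the final day
# spikes far above that employee's own historical baseline.
daily_file_downloads = np.array([12, 9, 14, 11, 10, 13, 12, 95])

baseline = daily_file_downloads[:-1]
mu, sigma = baseline.mean(), baseline.std()
z = (daily_file_downloads[-1] - mu) / sigma  # how unusual is today?

if z > 3:
    print(f"anomaly: today's downloads are {z:.1f} sigma above baseline")
```

The point is that the threshold is relative to each individual's own behaviour, which is what lets such systems surface insider anomalies that a one-size-fits-all rule would miss.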

Internet Security: Top 10 Tips for Staying Protected Online

The whole issue of internet security can be intricate and overwhelming, which is why some people tend to ignore it entirely. In this article we have broken the subject of internet safety down into tips that you can easily use and understand.

Get into the habit of creating new passwords for everything you do online. This includes banking, emails, your PayPal account, membership accounts and more. Make sure they are unique and are not too simple or easy to hack or guess.

Change your passwords regularly; once every two months at least is recommended. Make your passwords hard to guess by making them long, and include a variety of letters, symbols and numbers, some in upper case.

Don't use a single word, as this is easier to guess.
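One convenient way to follow this password advice is to generate long random passwords programmatically. This minimal sketch uses Python's standard secrets module, which is designed for security-sensitive randomness (unlike the random module):

```python
import secrets
import string

# Letters, digits and punctuation give a large alphabet; length matters most.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def make_password(length=20):
    """Return a long random password, one per site, never reused."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(make_password())
```

In practice a password manager does this for you and remembers the result, which also removes the temptation to reuse passwords across sites.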
Keep any programs that you use up to date. This includes programs such as Java, Microsoft Office, Adobe Reader, accounting programs and so on. If you have unused software, uninstall it.

Having fewer programs on your computer reduces your chances of being attacked.
Invest in a good antivirus and keep it updated. Look for a program that protects you from spyware, malware and viruses.

Be careful which websites you visit. Look for those that use the hypertext transfer protocol (http) with an "s", https://. This protocol actually authenticates the website and the web server, giving you added protection.
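A tiny helper in the same spirit (illustrative only; note that HTTPS means the connection is encrypted and the server authenticated, not that the site itself is trustworthy):

```python
from urllib.parse import urlparse

def is_https(url):
    """Refuse to treat a page as safe for sensitive data unless it is HTTPS."""
    return urlparse(url).scheme == "https"

print(is_https("https://example.com/login"))  # True
print(is_https("http://example.com/login"))   # False: don't enter credentials
```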

Be careful when opening your emails. You should never open any email that has an attachment from an unknown source; this is one of the most common ways to infect your computer. Get into the habit of reading the subject line of your email. If it doesn't sound right, or is just one word, be wary. If the email claims a friend sent it, check with them first before opening it.

Hackers love to get you to click on something. Watch out for popups that suddenly appear telling you that you have won a free gift or prize. You will end up with much more than you bargained for!

Another popup that often appears while you are surfing is one telling you that one of your programs needs updating, or that you have a virus. If you click on it, you will be asked to download a program to fix the issue. Before you know it, your computer has just been attacked.

Be careful where you buy apps. Try to stick with the official stores or the original company, as smaller online storefronts can be a front for hackers.
Another area to be careful of is downloading free software. Such sites will often want you to download other free gifts, which can contain viruses. Always download from a reputable site.


Aerospace Composites Market Is Expanding At A CAGR Of 8.5% From 2017 to 2025

Global Aerospace Composites Market Is Expected To Reach US$ 36.20 Bn By 2025: Growing Aerospace Industry Coupled With Increasing Use Of Composites In Aircraft Is Propelling The Market Growth

The latest market report published by Credence Research, Inc. “Aerospace Composites Market, By Fiber Type (Carbon Fiber, Glass Fiber, Aramid, and Others), By Application (Interior and Exterior Application), By Resin type (Epoxy, Phenolic, Polyester, Polyamide, and Thermoplastics) and By Region (North America, Europe, Asia Pacific, Latin America, & Middle East & Africa) – Growth, Future Prospects and Competitive Analysis, 2017 – 2025,” the global aerospace composites market was valued at US$ 18.5 Bn in 2016, and is expected to reach US$ 36.20 Bn by 2025, expanding at a CAGR of 8.5% from 2017 to 2025.
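As a quick consistency check, the CAGR implied by the quoted 2016 and 2025 endpoints can be computed directly; counted over eight compounding years it lands close to the quoted 8.5% (over nine, somewhat lower):

```python
# Implied CAGR from the quoted endpoints: (end / start) ** (1 / years) - 1.
start, end = 18.5, 36.20   # US$ Bn, 2016 and 2025 values from the report

for years in (8, 9):
    implied = (end / start) ** (1 / years) - 1
    print(f"{years} compounding years -> implied CAGR {implied:.1%}")
```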

Browse the full report at http://www.credenceresearch.com/report/aerospace-composites-market

Market Insights

The aerospace industry has now switched from plastics to composites due to their attractive mechanical and chemical properties. Aerospace composites find application in the commercial and business aircraft, helicopter, and space sectors. Factors such as increasing air passenger traffic, rising use of composites in emerging economies, and an increase in the number of low-cost carriers are driving the aerospace composites market.

Composite materials offer several benefits, such as lightweight structures, which enable weight savings, lower fuel consumption and greater range, enhancing the performance of the aircraft. Hence, composites are becoming increasingly popular among market players as innovative means of delivering additional value to end-users. However, the high cost of raw materials and production, along with the non-biodegradability of reinforced polymers, is expected to restrain market growth in coming years.

Download Sample Report @ http://www.credenceresearch.com/sample-request/58621

Competitive Insights

The aerospace composites market is extremely fragmented and competitive in nature, with a large number of players operating in it. With the rise in the use of composites in the aerospace industry, players are focusing on providing customized and innovative solutions to giants such as Airbus and Boeing to gain a competitive edge. Major players such as the Solvay Group, Hexcel Corp., Teijin and others are focusing on expanding their presence in several emerging economies to attract a high volume of customers during the forecast period. Joint ventures and mergers and acquisitions are the major strategies being used by several players for expansion.

Key Trends

– Regions such as Europe and Asia-Pacific offer huge growth potential

– Decrease in manufacturing and assembly costs of aircraft

 
