WELCOME TO MY TECHNICAL UNIVERSE: the physics of cognitive systems

I used to have a description of each of my papers on this page, but it got very boring to read as the numbers grew, so I moved most of it to here. After graduate work on the role of atomic and molecular chemistry in cosmic reionization, I have mainly focused my research on issues related to constraining cosmological models. A suite of papers developed methods for analyzing cosmological data sets and applied them to various CMB experiments and galaxy redshift surveys, often in collaboration with the experimentalists who had taken the data. Another series of papers tackled various “dirty laundry” issues such as microwave foregrounds and mass-to-light bias. Other papers like this one develop and apply techniques for clarifying the big picture in cosmology: comparing and combining diverse cosmological probes, cross-checking for consistency and constraining cosmological models and their free parameters. (The difference between cosmology and ice hockey is that I don’t get penalized for cross-checking…) My main current research interest is cosmology theory and phenomenology. I’m particularly enthusiastic about the prospects of comparing and combining current and upcoming data on CMB, LSS, galaxy clusters, lensing, LyA forest clustering, SN Ia, 21 cm tomography, etc. to raise the ambition level beyond the current cosmological parameter game, testing rather than assuming the underlying physics. This paper contains my battle cry. I also retain a strong interest in low-level nuts-and-bolts analysis and interpretation of data, firmly believing that the devil is in the details, and am actively working on neutral hydrogen tomography theory, experiment and data analysis for our Omniscope project, which you can read all about here.

OTHER RESEARCH: SIDE INTERESTS

Early galaxy formation and the end of the cosmic dark ages
One of the main challenges in modern cosmology is to quantify how small density fluctuations at the recombination epoch at redshift around z=1000 evolved into the galaxies and the large-scale structure we observe in the universe today. My Ph.D. thesis with Joe Silk focused on ways of probing the interesting intermediate epoch. The emphasis was on the role played by non-linear feedback, where a small fraction of matter forming luminous objects such as stars or QSOs can inject enough energy into their surroundings to radically alter subsequent events. We know that the intergalactic medium (IGM) was reionized at some point, but the details of when and how this occurred remain open. The absence of a Gunn-Peterson trough in the spectra of high-redshift quasars suggests that it happened before z=5, which could be achieved through supernova-driven winds from early galaxies. Photoionization was thought to be able to partially reionize the IGM much earlier, perhaps early enough to affect the cosmic microwave background (CMB) fluctuations, especially in an open universe. However, extremely early reionization is ruled out by the COBE FIRAS constraints on the Compton y-distortion. To make predictions for when the first objects formed and how big they were, you need to worry about something I hate: molecules. Although I was so fed up with rate discrepancies in the molecule literature that I verged on making myself a Ghostbusters-style T-shirt reading “MOLECULES – JUST SAY NO”, the irony is that the molecule paper I hated so much ended up being one of my most cited ones, whereas others that I had lots of fun with went largely unnoticed…

Math problems
I’m also interested in physics-related mathematics problems in general. For instance, if you don’t believe that part of a constrained elliptic metal sheet may bend towards you if you try to push it away, you are making the same mistake that the famous mathematician Hadamard once did.

WELCOME TO MY TECHNICAL UNIVERSE
I love working on projects that involve cool questions, great state-of-the-art data and powerful physical/mathematical/computational tools. During my first quarter-century as a physics researcher, this criterion has led me to work mainly on cosmology and quantum information. Although I’m continuing my cosmology work with the HERA collaboration, the main focus of my current research is on the physics of cognitive systems: using physics-based techniques to understand how brains work and to build better AI (artificial intelligence) systems. If you’re interested in working with me on these topics, please let me know, as I’m potentially looking for new students and postdocs (see requirements). I’m fortunate to have collaborators who generously share amazing neuroscience data with my group, including Ed Boyden, Emery Brown and Tomaso Poggio at MIT and Gabriel Kreiman at Harvard, and to have such inspiring colleagues here in our MIT Physics Department in our new division studying the physics of living systems. I’ve been pleasantly surprised by how many of the data analysis techniques I’ve developed for cosmology can be adapted to neuroscience data as well. There’s clearly no shortage of fascinating questions surrounding the physics of intelligence, and there’s no shortage of powerful theoretical tools either, ranging from neural network physics and non-equilibrium statistical mechanics to information theory, the renormalization group and deep learning. Intriguingly and surprisingly, there’s a duality between the last two. I recently helped organize conferences on the physics of information and artificial intelligence. I’m very interested in the question of how to model an observer in physics, and whether simple necessary conditions for a physical system being a conscious observer can help explain how the familiar object hierarchy of the classical world emerges from the raw mathematical formalism of quantum mechanics. Here’s a taxonomy of proposed consciousness measures. Here’s a TEDx talk of mine about the physics of consciousness. Here’s an intriguing connection between critical behavior in magnets, language, music and DNA. In older work of mine on the physics of the brain, I showed that neuron decoherence is far too fast for the brain to be a quantum computer. However, it’s nonetheless interesting to study our brains as quantum systems, to better understand why they perceive the sort of classical world that they do. For example, why do we feel that we live in real space rather than Fourier space, even though both are equally valid quantum descriptions related by a unitary transformation?

Quantum information
My work on the physics of cognitive systems is a natural outgrowth of my long-standing interest in quantum information, both for enabling new technologies such as quantum computing and for shedding new light on how the world fundamentally works. For example, I’m interested in how the second law of thermodynamics can be generalized to explain how the entropy of a system typically decreases while you observe it and increases while you don’t, and how this can help explain how inflation causes the emergence of an arrow of time. When you don’t observe an interacting system, you can get decoherence, which I had the joy of rediscovering as a grad student – if you’d like to know more about what this is, check out my article with John Archibald Wheeler in Scientific American here. I’m interested in decoherence both for its quantitative implications for quantum computing etc. and for its philosophical implications for the interpretation of quantum mechanics. For much more on this wackier side of mine, click the banana icon above. Since macroscopic systems are virtually impossible to isolate from their surroundings, a number of quantitative predictions can be made for how their wavefunction will appear to collapse, in good agreement with what we in fact observe. Similar quantitative predictions can be made for models of heat baths, showing how the effects of the environment cause the familiar entropy increase and apparent directionality of time. Intriguingly, decoherence can also be shown to produce generalized coherent states, indicating that these are not merely a useful approximation, but indeed the type of quantum state that we should expect nature to be full of. All these changes in the quantum density matrix can in principle be measured experimentally, with phases and all.
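To make concrete what decoherence does to a density matrix, here is a minimal toy sketch (a textbook phase-damping channel with an arbitrary coupling strength of my choosing, not a calculation from any of the papers above): the off-diagonal “quantumness” decays while the classical probabilities on the diagonal survive, which is exactly the apparent collapse described above.

```python
# Toy qubit decoherence via a phase-damping channel (illustrative values).
import numpy as np

def phase_damp(rho, lam):
    """Apply one step of the phase-damping channel with strength lam."""
    K0 = np.array([[1, 0], [0, np.sqrt(1 - lam)]], dtype=complex)
    K1 = np.array([[0, 0], [0, np.sqrt(lam)]], dtype=complex)
    return K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

# Start in the superposition (|0> + |1>)/sqrt(2): maximal coherence.
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

lam = 0.2  # per-step coupling to the environment (toy value)
for _ in range(20):
    rho = phase_damp(rho, lam)

print(np.round(rho, 4))
# The diagonal stays at 0.5 (probabilities preserved), while the
# off-diagonal coherence has decayed toward zero: an apparent collapse.
```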

Cosmology
My cosmology research has been focused on precision cosmology, i.e., combining theoretical work with new measurements to place sharp constraints on cosmological models and their free parameters. (Skip to here if you already know all this.) Spectacular new measurements are providing powerful tools for this.

So far, I’ve worked mainly on CMB, LSS and 21 cm tomography, with some papers involving lensing, SN Ia and LyAF as well. Why do I find cosmology exciting? (Even if you don’t find cosmology exciting, there are good reasons why you should support physics research.)

  1. There are some very basic questions that still haven’t been answered. For instance,
    • Is only 5% of our universe really made of atoms? So it seems, but what precisely are the weird “dark matter” and “dark energy” that make up the rest?
    • Will the Universe expand forever or end in a cataclysmic crunch or big rip? The smart money is now on the first option, but the jury is still out.
    • How did it all begin, or did it? This is linked to particle physics and unifying gravity with quantum theory.
    • Are there infinitely many other stars, or does space connect back on itself? Most of my colleagues assume it is infinite and the data supports this, but we don’t know yet.
  2. Thanks to an avalanche of great new data, driven by advances in satellite, detector and computer technology, we may be only years away from answering some of these questions.

Satellites Rock!
Since our atmosphere messes up most electromagnetic waves coming from space (the main exceptions being radio waves and visible light), the advent of satellites has revolutionized our ability to photograph the Universe in microwaves, infrared light, ultraviolet light, X-rays and gamma rays. New low-temperature detectors have greatly improved what can be done from the ground as well, and the computer revolution has enabled us to gather and process huge data quantities, doing research that would have been unthinkable twenty years ago. This data avalanche has transformed cosmology from being a mainly theoretical field, occasionally ridiculed as speculative and flaky, into a data-driven quantitative field where competing theories can be tested with ever-increasing precision. I find CMB, LSS, lensing, SN Ia, LyAF, clusters and BBN to be very exciting areas, since they are all being transformed by new high-precision measurements as described below. Since each of them measures different but related aspects of the Universe, they both complement each other and allow lots of cross-checks.

What are these cosmological parameters?
Cosmic matter budget
In our standard cosmological model, the Universe was once in an extremely dense and hot state, where things were essentially the same everywhere in space, with only tiny fluctuations (at the level of 0.00001) in the density. As the Universe expanded and cooled, gravitational instability caused these fluctuations to grow into the galaxies and the large-scale structure that we observe in the Universe today. To calculate the details of this, we need to know about a dozen numbers, so-called cosmological parameters. Most of these parameters specify the cosmic matter budget, i.e., what the density of the Universe is made up of – the amounts of the following ingredients (a back-of-the-envelope sketch follows the list):

  • Baryons – the kind of particles that you and I and all the chemical elements we learned about in school are made of: protons and neutrons. Baryons appear to make up only about 5% of all stuff in the Universe.
  • Photons – the particles that make up light. Their density is the best measured one on this list.
  • Massive neutrinos – neutrinos are very shy particles. They are known to exist, and now at least two of the three or more kinds are known to have mass.
  • Cold dark matter – unseen mystery particles widely believed to exist. There seems to be about five times more of this strange stuff than baryons, making us a minority in the Universe.
  • Curvature – if the total density differs from a certain critical value, space will be curved. Sufficiently high density would make space finite, curving back on itself like the 3D surface of a 4D hypersphere.
  • Dark energy – little more than a fancy name for our ignorance of what seems to make up about two thirds of the matter budget. One popular candidate is a “cosmological constant”, a.k.a. Lambda, which Einstein invented and later called his greatest blunder. Other candidates are more complicated modifications to Einstein’s theory of gravity, as well as energy fields known as “quintessence”. Dark energy causes gravitational repulsion in place of attraction, and combining new SN Ia and CMB data indicates that we might be living with Lambda after all.
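To make the budget bookkeeping concrete, here is a minimal Python sketch (my own back-of-the-envelope calculation, with an illustrative H0 of 70 km/s/Mpc rather than a value quoted on this page) of the critical density that these ingredients must sum to for space to be flat:

```python
# Critical density rho_crit = 3 H0^2 / (8 pi G) and what a 5% baryon
# share means in everyday units (illustrative parameter values).
import math

G = 6.674e-11                      # Newton's constant [m^3 kg^-1 s^-2]
Mpc = 3.0857e22                    # one megaparsec in meters
H0 = 70e3 / Mpc                    # Hubble constant, 70 km/s/Mpc, in 1/s

rho_crit = 3 * H0**2 / (8 * math.pi * G)
m_proton = 1.673e-27               # proton mass [kg]

print(f"critical density: {rho_crit:.2e} kg/m^3")
print(f"... about {rho_crit / m_proton:.1f} protons per cubic meter")
print(f"baryons (5% of the budget): {0.05 * rho_crit / m_proton:.2f} protons/m^3")
# The fractions Omega_i are densities in units of rho_crit; in the standard
# flat model they sum to 1 (roughly 5% baryons, 27% CDM, 68% dark energy).
```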

Then there are a few parameters describing those tiny fluctuations in the early Universe: exactly how tiny they were, the ratio of fluctuations on small and large scales, the relative phase of fluctuations in the different types of matter, etc. Accurately measuring these parameters would test the most popular theory for the origin of these wiggles, known as inflation, and teach us about physics at much higher energies than are accessible with particle accelerator experiments. Finally, there are some parameters that Dick Bond would refer to as “gastrophysics”, since they involve gas and other ghastly stuff. One example is the extent to which feedback from the first galaxies has affected the CMB fluctuations via reionization. Another example is bias, the relation between fluctuations in the matter density and the number of galaxies. One of my main current interests is using the avalanche of new data to raise the ambition level beyond cosmological parameters, testing rather than assuming the underlying physics. My battle cry is published here, with nuts and bolts details here and here.

The cosmic toolbox
Here is a brief summary of some key cosmological observables and what they can teach us about cosmological parameters.

Cosmic microwave background
Photos of the cosmic microwave background (CMB) radiation like the one to the left show us the most distant object we can see: a hot, opaque wall of glowing hydrogen plasma about 14 billion light years away. Why is it there? Well, as we look further away, we’re seeing things that happened longer ago, since it’s taken the light a long time to get here. We see the Sun as it was eight minutes ago, the Andromeda galaxy the way it was a few million years ago, and this glowing surface as it was just 400,000 years after the Big Bang. We can see that far back since the hydrogen gas that fills intergalactic space is transparent, but we can’t see further, since earlier the hydrogen was so hot that it was an ionized plasma, opaque to light, looking like a hot glowing wall just like the surface of the Sun. The detailed patterns of hotter and colder spots on this wall constitute a goldmine of information about the cosmological parameters mentioned above. If you are a newcomer and want an introduction to CMB fluctuations and what we can learn from them, I’ve written a review here. If you don’t have a physics background, I recommend the on-line tutorials by Wayne Hu and Ned Wright. Two promising new CMB fronts are opening up – CMB polarization and arcminute-scale CMB – and they are likely to keep the CMB field lively for at least another decade.

Hydrogen tomography
Mapping our universe in 3D by imaging the redshifted 21 cm line from neutral hydrogen has the potential to overtake the cosmic microwave background as our most powerful cosmological probe, because it can map a much larger volume of our Universe, shedding new light on the epoch of reionization, inflation, dark matter, dark energy, and neutrino masses. For this reason, my group built MITEoR, a pathfinder low-frequency radio interferometer whose goal was to test technologies that greatly reduce the cost of such 3D mapping for a given sensitivity. MITEoR accomplished this by using massive baseline redundancy both to enable automated precision calibration and to cut the correlator cost scaling from N² to N log N, where N is the number of antennas. The success of MITEoR with its 64 dual-polarization elements bodes well for the more ambitious HERA project, which incorporates many of the technologies MITEoR tested using dramatically larger collecting area.
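To make the N² versus N log N point concrete, here is a toy numpy sketch (a 1D, single-snapshot cartoon of my own, not the actual MITEoR or HERA pipeline): for a regular, redundant array, correlating all antenna pairs amounts to a spatial autocorrelation, which an FFT computes in O(N log N) instead of O(N²) pair products.

```python
# Toy demo: pairwise correlations of a gridded array via FFT.
import numpy as np

N = 64
rng = np.random.default_rng(0)
x = rng.normal(size=N) + 1j * rng.normal(size=N)   # antenna voltages on a grid

# Direct pairwise route, O(N^2): for each baseline b,
# sum_n x[n+b] * conj(x[n]).
direct = np.array([np.sum(x[b:] * np.conj(x[: N - b])) for b in range(N)])

# FFT route, O(N log N): zero-pad to avoid wraparound,
# then use the correlation theorem.
X = np.fft.fft(np.concatenate([x, np.zeros(N)]))
fft_corr = np.fft.ifft(np.conj(X) * X)[:N]

print(np.allclose(direct, fft_corr))               # True: same visibilities
```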

Galaxy clusters and large-scale structure
3D mapping of the Universe with galaxy redshift surveys offers another window on dark matter properties, through its gravitational effects on galaxy clustering. This field is currently being transformed by ever larger galaxy redshift surveys. I’ve had lots of fun working with my colleagues on the Sloan Digital Sky Survey (SDSS) to carefully analyze the gargantuan galaxy maps and work out what they tell us about our cosmic composition, origins and ultimate fate. The abundance of galaxy clusters, the largest gravitationally bound and equilibrated blobs of stuff in the Universe, is a very sensitive probe of both the cosmic expansion history and the growth of matter clustering. Many powerful cluster-finding techniques are contributing to rapid growth in the number of known clusters and our knowledge of their properties: identifying them in 3D galaxy surveys, seeing their hot gas as hot spots in X-ray maps or cold spots in microwave maps (the so-called SZ effect), or spotting their gravitational effects with gravitational lensing.

Gravitational lensing
Yet another probe of dark matter is offered by gravitational lensing, whereby its gravitational pull bends light rays and distorts images of distant objects. The first large-scale detections of this effect were reported by four groups (astro-ph/0002500, 0003008, 0003014, 0003338) in the year 2000, and I anticipate making heavy use of such measurements as they continue to improve, partly in collaboration with Bhuvnesh Jain at Penn. Lensing is ultimately as promising as the CMB and is free from the murky bias issues plaguing LSS and LyAF measurements, since it probes the matter density directly via its gravitational pull. I’ve also dabbled some in the stronger lensing effects caused by galaxy cores, which offer additional insights into the detailed nature of the dark matter.

Supernovae Ia
If a white dwarf (the corpse of a burned-out low-mass star like our Sun) orbits another dying star, it may gradually steal its gas and exceed the maximum mass at which it can remain stable. This makes it collapse under its own weight and blow up in a cataclysmic explosion called a type Ia supernova. Since all of these cosmic bombs weigh the same when they go off (about 1.4 solar masses, the so-called Chandrasekhar mass), they all release roughly the same amount of energy – and a more detailed calibration of this energy is possible by measuring how fast the explosion dims, making SN Ia the best “standard candles” visible at cosmological distances. The Supernova Cosmology Project and the High-z Supernova Search Team mapped out how bright SN Ia looked at different redshifts and found the first evidence in 1998 that the expansion of the Universe is accelerating. This approach can ultimately provide a direct measurement of the density of the Universe as a function of time, helping unravel the nature of dark energy – I hope the SNAP project or one of its competitors gets funded. The image to the left resulted from a different type of supernova, but I couldn’t resist showing it anyway.
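To illustrate the standard-candle logic, here is a small numpy sketch (illustrative cosmological parameters of my choosing, not a fit to any supernova data set) of how much fainter, in distance modulus, a SN Ia looks in an accelerating Lambda universe than in a matter-only one:

```python
# Distance modulus mu(z) = 5 log10(d_L / 10 pc) in a flat universe,
# comparing a dark-energy model with a matter-only model (toy values).
import numpy as np

c = 299792.458  # speed of light [km/s]

def distance_modulus(z, H0=70.0, Om=0.3):
    """Flat-universe mu(z) via a simple trapezoid-rule integration."""
    zs = np.linspace(0.0, z, 2000)
    E = np.sqrt(Om * (1 + zs) ** 3 + (1 - Om))   # H(z)/H0
    d_C = (c / H0) * np.trapz(1.0 / E, zs)       # comoving distance [Mpc]
    d_L = (1 + z) * d_C                          # luminosity distance [Mpc]
    return 5 * np.log10(d_L) + 25                # +25 converts Mpc to 10 pc

for z in (0.5, 1.0):
    print(f"z={z}: mu(Lambda)={distance_modulus(z, Om=0.3):.2f}, "
          f"mu(matter-only)={distance_modulus(z, Om=1.0):.2f}")
# The Lambda model gives larger mu, i.e. fainter supernovae at fixed z:
# the 1998 signature of accelerating expansion.
```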

Lyman Alpha Forest
The so-called Lyman Alpha Forest, cosmic gas clouds backlit by quasars, offers yet another exciting probe of how dark matter has clumped ordinary matter together, and is sensitive to an epoch when the Universe was merely 10-20% of its present age. Although relating the measured absorption to the densities of gas and dark matter involves some complications, it completely circumvents the Pandora’s box of galaxy biasing. Cosmic observations are rapidly advancing on many other fronts as well, e.g., with direct measurements of the cosmic expansion rate and the cosmic baryon fraction.
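Since galaxy bias keeps coming up above, here is a minimal numpy illustration (a toy Gaussian field with a linear bias value of my choosing) of why it muddies LSS interpretation: if galaxies trace matter as delta_g = b * delta_m, the measured galaxy power spectrum is b² times the matter one, so amplitude alone cannot separate b from the matter fluctuations.

```python
# Linear galaxy bias: P_g(k) = b^2 * P_m(k) for a toy 1D field.
import numpy as np

rng = np.random.default_rng(1)
delta_m = rng.normal(size=4096)       # stand-in matter overdensity field
b = 2.0                               # linear bias (illustrative value)
delta_g = b * delta_m                 # biased galaxy overdensity

P_m = np.abs(np.fft.rfft(delta_m)) ** 2
P_g = np.abs(np.fft.rfft(delta_g)) ** 2
print(np.allclose(P_g, b**2 * P_m))   # True: amplitude degenerate with b
```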

Will Artificial Intelligence Become Conscious?

Forget about today’s modest incremental advances in artificial intelligence, such as the increasing ability of cars to drive themselves. Waiting in the wings might be a groundbreaking development: a machine that is aware of itself and its surroundings, and that could take in and process massive amounts of data in real time. It could be sent on dangerous missions, into space or combat. In addition to driving people around, it might be able to cook, clean and do laundry, and even keep people company when other humans aren’t nearby.

A particularly advanced set of machines could replace humans at literally all jobs. That would save humankind from workaday drudgery, but it would also shake many societal foundations. A life of no work and only play may turn out to be a dystopia.

Conscious machines would also raise troubling legal and ethical problems. Would a conscious machine be a “person” under the law, and be liable if its actions hurt someone or if something goes wrong? To consider a more frightening scenario, might these machines rebel against humans and wish to eliminate us altogether? If so, they would represent the culmination of evolution.

As a professor of electrical engineering and computer science who works in artificial intelligence and quantum theory, I can say that researchers are divided on whether these sorts of hyperaware machines will ever exist. There’s also debate about whether machines could or should be called “conscious” in the way we think of humans, and even some animals, as conscious. Some of the questions have to do with technology; others with what consciousness actually is.

Is Awareness Enough?
Many computer scientists think that consciousness is a characteristic that will emerge as technology develops. Some believe that consciousness involves accepting new information, storing and retrieving old information, and cognitive processing of it all into perceptions and actions. If that’s right, then one day machines will indeed be the ultimate consciousness. They’ll be able to gather more information than a human, store more than many libraries, access vast databases in milliseconds, and compute all of it into decisions more complex, and yet more logical, than any person ever could.

On the other hand, there are physicists and philosophers who say there’s something more about human behavior that cannot be computed by a machine. Creativity, for example, and the sense of freedom people possess don’t appear to come from logic or calculation.

Yet these are not the only views of what consciousness is, or of whether machines could ever achieve it.

Quantum Views
Another viewpoint on consciousness comes from quantum theory, which is the deepest theory of physics. According to the orthodox Copenhagen interpretation, consciousness and the physical world are complementary aspects of the same reality. When a person observes, or experiments on, some aspect of the physical world, that person’s conscious interaction causes discernible change. Since it takes consciousness as a given and makes no attempt to derive it from physics, the Copenhagen interpretation may be called the “big-C” view of consciousness, where consciousness is a thing that exists by itself, although it requires brains to become real. This view was popular with the pioneers of quantum theory such as Niels Bohr, Werner Heisenberg and Erwin Schrödinger.

The interaction between consciousness and matter leads to paradoxes that remain unresolved after 80 years of debate. A well-known example is the paradox of Schrödinger’s cat, in which a cat is placed in a situation that leaves it equally likely to survive or die, and the act of observation itself is what makes the outcome certain.

The opposing view is that consciousness emerges from biology, just as biology itself emerges from chemistry, which in turn emerges from physics. We call this less expansive concept of consciousness “little-C”. It agrees with the neuroscientists’ view that the processes of the mind are identical to states and processes of the brain. It also agrees with a more recent interpretation of quantum theory motivated by an attempt to rid it of paradoxes, the Many Worlds interpretation, in which observers are a part of the mathematics of physics.

Philosophers of science believe that these modern quantum-physics views of consciousness have parallels in ancient philosophy. Big-C is like the theory of mind in Vedanta, in which consciousness is the fundamental basis of reality, on a par with the physical world.

Little-C, in contrast, is quite similar to Buddhism. Although the Buddha chose not to address the question of the nature of consciousness, his followers declared that mind and consciousness arise out of emptiness or nothingness.

Big-C and Scientific Discovery
Scientists are also exploring whether consciousness is always a computational process. Some scholars have argued that the creative moment is not at the end of a deliberate computation. For instance, dreams or visions are supposed to have inspired Elias Howe’s 1845 design of the modern sewing machine, and August Kekulé’s discovery of the structure of benzene in 1862.

A dramatic piece of evidence in favor of big-C consciousness existing all on its own is the life of the self-taught Indian mathematician Srinivasa Ramanujan, who died in 1920 at the age of 32. His notebook, which was lost and forgotten for about 50 years and published only in 1988, contains several thousand formulas, without proof, in different areas of mathematics that were well ahead of their time. Moreover, the methods by which he found the formulas remain elusive. He himself claimed that they were revealed to him by a goddess while he was asleep.

The concept of big-C consciousness raises the questions of how it is related to matter, and how matter and mind mutually influence each other. Consciousness alone cannot make physical changes to the world, but perhaps it can change the probabilities in the evolution of quantum processes. The act of observation can freeze and even influence atoms’ movements, as Cornell physicists demonstrated in 2015. This may well be an explanation of how matter and mind interact.

Mind and Self-Organizing Systems
It is possible that the phenomenon of consciousness requires a self-organizing system, like the brain’s physical structure. If so, then current machines will come up short.

Scholars don’t know whether adaptive self-organizing machines can be designed to be as sophisticated as the human brain; we lack a mathematical theory of computation for systems like that. Perhaps it’s true that only biological machines can be sufficiently creative and flexible. If so, that suggests people should, or soon will, start working on engineering new biological structures that are, or could become, conscious.

Will we be replaced by robots? — A Better Man

I was asked this question by the head of a new AI startup, whose technology, I quote, “aims to change the world.” Will jobs done by regular people be replaced? Fast Company predicts these will be the jobs hit worst. 1. INSURANCE UNDERWRITERS AND CLAIMS REPRESENTATIVES 2. BANK TELLERS […]

via Will we be replaced by robots? — A Better Man

Artificial intelligence

Artificial intelligence (AI, also machine intelligence, MI) is intelligence exhibited by machines, rather than humans or other animals (natural intelligence, NI). In computer science, the field of AI research defines itself as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of success at some goal.[1] Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”.[2]
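As a rough schematic of the “intelligent agent” definition quoted above (the names, actions and utility function here are entirely illustrative, not any real library), an agent reduces to a loop that maps each percept to the action with the highest expected success at its goal:

```python
# Minimal "intelligent agent" sketch: perceive, then act to maximize
# a goal-derived utility (a toy thermostat keeping a room near 20 C).
from typing import Callable, Iterable

def agent_step(percept: float,
               actions: Iterable[str],
               utility: Callable[[str, float], float]) -> str:
    """Pick the action with the highest estimated utility for this percept."""
    return max(actions, key=lambda a: utility(a, percept))

actions = ["heat", "cool", "idle"]
utility = lambda a, temp: {"heat": 20 - temp,
                           "cool": temp - 20,
                           "idle": 1 - abs(temp - 20)}[a]

print(agent_step(17.0, actions, utility))   # -> "heat"
print(agent_step(20.4, actions, utility))   # -> "idle"
```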

The scope of AI is disputed: as machines become increasingly capable, tasks considered as requiring “intelligence” are often removed from the definition, a phenomenon known as the AI effect, leading to the quip “AI is whatever hasn’t been done yet.”[3] For instance, optical character recognition is frequently excluded from “artificial intelligence”, having become a routine technology.[4] Capabilities generally classified as AI, as of 2017, include successfully understanding human speech,[5] competing at a high level in strategic game systems (such as chess and Go[6]), autonomous cars, intelligent routing in content delivery networks, military simulations, and interpreting complex data.

Artificial intelligence was founded as an academic discipline in 1956, and in the years since has experienced several waves of optimism,[7][8] followed by disappointment and the loss of funding (known as an “AI winter”),[9][10] followed by new approaches, success and renewed funding.[11] For most of its history, AI research has been divided into subfields that often fail to communicate with each other.[12] However, in the early 21st century statistical approaches to machine learning became successful enough to eclipse all other tools, approaches, problems and schools of thought.[11]

The traditional problems (or goals) of AI research include reasoning, knowledge, planning, learning, natural language processing, perception and the ability to move and manipulate objects.[13] General intelligence is among the field’s long-term goals.[14] Approaches include statistical methods, computational intelligence, and traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimization, neural networks and methods based on statistics, probability and economics. The AI field draws upon computer science, mathematics, psychology, linguistics, philosophy, neuroscience, artificial psychology and many others.

The field was founded on the claim that human intelligence “can be so precisely described that a machine can be made to simulate it”.[15] This raises philosophical arguments about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues which have been explored by myth, fiction and philosophy since antiquity.[16] Some people also consider AI a danger to humanity if it progresses unabated.[17]

In the twenty-first century, AI techniques have experienced a resurgence following concurrent advances in computer power, large amounts of data, and theoretical understanding, and AI techniques have become an essential part of the technology industry, helping to solve many challenging problems in computer science.[18]

Three MIT graduate students awarded 2018 Paul and Daisy Soros Fellowships for New Americans

Three MIT graduate students — Sitan Chen, Lillian Chin ’17, and Suchita Nety — are among the 30 recipients of the 2018 Paul and Daisy Soros Fellowships for New Americans. Sylvia Biscoveanu, a recent graduate of Penn State University who will be pursuing a PhD at the MIT Kavli Institute for Astrophysics and Space Research next fall, was also named a Soros Fellow.

The Soros Fellowships provide up to $90,000 in funding for graduate studies for immigrants and the children of immigrants. Award winners are selected for their potential to make significant contributions to United States society, culture, or their academic fields. This year, over 1,700 candidates applied to the prestigious fellowship program.

In the past eight years, 29 MIT students and alumni have been awarded Soros Fellowships. Eligible applicants include children of immigrants, naturalized citizens, green card holders, and Deferred Action for Childhood Arrivals (DACA) recipients. Beginning in 2019, the fellowship will expand its eligibility to include former DACA recipients should the government program be rescinded.

MIT students interested in applying to the Soros Fellowship should contact Kim Benard, assistant dean of distinguished fellowships and academic excellence. The application for the Soros Class of 2019 is now open, and the national deadline is Nov. 1, 2018. 

Sitan Chen

Sitan Chen is a PhD student in electrical engineering and computer science and a member of the MIT Computer Science and Artificial Intelligence Lab (CSAIL) and the Theory of Computation Group. Chen’s award will support work toward his doctorate in computer science.

Born in Hefei, China, Chen was 1 year old when his family immigrated to Canada so that his father could complete his doctorate at the University of Toronto. The family moved to Suwanee, Georgia, in the early 2000s, and Chen’s experiences throughout high school with math contests and programs like the Research Science Institute ultimately motivated him to study mathematics and computer science at Harvard University.

Chen graduated summa cum laude from Harvard in 2016, receiving the Thomas T. Hoopes and Captain Jonathan Fay Prizes for his thesis on geometric aspects of counting complexity and arithmetic complexity. Chen’s mentors in Harvard’s Theory of Computing research group encouraged him to pursue graduate studies in theoretical computer science.

In the fall of 2016, Chen began his doctoral program in computer science at MIT. His work with PhD advisor Ankur Moitra, professor in the Department of Mathematics and principal investigator at CSAIL, centers on algorithmic problems in machine learning and inference.

Chen is focusing on developing new mathematical frameworks to analyze techniques such as the method of moments, Gibbs sampling, and local search that are popular in practice but poorly understood in theory. He has presented his work at venues including the Symposium on Theory of Computing and the Simons Institute for the Theory of Computing.

Lillian Chin ’17

Lillian Chin graduated from MIT in June 2017 with a bachelor of science degree in electrical engineering and computer science. She continued on to a doctoral program in the department, and her award will support work toward a PhD in electrical engineering and computer science. As a graduate student at MIT, her research interests are in robotics — specifically, integrating versatile hardware design with strong control algorithms.

Chin was born in New York City after her parents left China and Taiwan to pursue graduate school in the United States. Her parents instilled Chin’s love of science by frequently taking her to their lab and explaining their experiments. As she grew older, Chin began pursuing engineering and research more intensely, competing on an international level in the FIRST Robotics Competition and being nationally recognized for bioengineering research through the Intel Science Talent Search.

During her undergraduate career at MIT, Chin further developed her skills in strong interdisciplinary research, creating new materials that could be used to more efficiently move soft robots, and designing a novel manufacturing process that can print tissues and circuits. Chin also was able to pursue summer internships at Apple, Square, and the Toyota Research Institute. And in February 2017, Chin bested thousands of applicants and 14 on-air competitors when she won the 2017 “Jeopardy!” College Championship, representing MIT.

As a graduate student at MIT and a 2018 Hertz Fellow, Chin is currently working on better integrating the mechanical advantages of soft robotics with the latest in learning and planning algorithms. Her ultimate career goal is to become a professor in robotics: designing systems to enable human achievement.

Suchita Nety

Suchita Patil Nety was born in Sunnyvale, California, to immigrants from India who came to the United States to attend graduate school. She draws inspiration from her upbringing in the dynamic and diverse Silicon Valley as well as her grandparents’ experiences as freedom fighters for Indian independence.

Nety’s research projects throughout high school, including cancer imaging research conducted at Stanford, earned regional and national-level awards. In June 2017, she earned a BS in chemistry from Caltech. While there, she spent four years in the lab of chemical engineering professor Mikhail Shapiro. Her work with protein-based reporters for ultrasound imaging resulted in a patent, publications, presentations, and awards, including Caltech’s highest honor for undergraduate academics and research.

Nety is interested in forms of storytelling and healing that complement her future role in medicine. While at Caltech, she pursued her love for literature and obtained an English minor, won writing prizes, tutored in the campus writing center, and volunteered for a literacy nonprofit. She attained professional status in Bharatanatyam, a style of Indian classical dance, and is an avid hip hop choreographer. 

Nety’s award will support work toward an MD/PhD at Harvard Medical School and MIT. After completing this training, Nety hopes to serve patients as a medical oncologist while developing molecular tools to engineer robust and safe cell-based therapies.
