How Artificial Intelligence Marketing Is Transforming the Game

Artificial intelligence marketing takes things one step further than SEO practices and the like: with machine learning, AI can learn and adjust algorithms to work more efficiently. This means that as you use an AI-powered application, it gets better at its job. And thanks to natural language processing, users can interact with AI-based tools just as they would with a human.

Using Artificial Intelligence to Build Real Relationships

Retention Science (RS) is a B2B artificial intelligence marketing technology that helps retailers and brands identify, engage, and retain their customers. It accurately predicts customer behavior and uses those insights to conduct one-to-one email, website, and mobile marketing campaigns at scale to raise conversion rates and revenue. Founded in 2013 and headquartered in Los Angeles, Retention Science powers campaigns for Target, Dollar Shave Club, The Honest Company, BCBG, Wet Seal, and many other innovative e-commerce brands.

Consumer Marketing 2017

We are entering a new era of marketing: the age of Artificial Intelligence Marketing (AIM), an era in which machines run thousands of recursive tests and handle the mathematical optimization of customer value creation, while the marketer remains in control and spends more time being strategic and creative. This session will explore the foundations of AIM and examine real-world use cases in which customers from industries such as gaming, telecom, and banking have realized material growth in customer value metrics, including customer retention and average revenue per user (ARPU).

Artificial Intelligence Marketing (AIM)

Did I say 100 lessons? I meant four. Why four? It fits. Lesson number four: good guys do win, under-promising and over-delivering does work, and artificial intelligence marketing is real. OK, that is three lessons crammed into one sentence, and one of them is a cliché, but again, this is my blog; I get to write what I want. I look forward to getting on stage, grabbing the mic, and pitching. My obsession with marketing technology is obvious. Over the last decade, I have continued to be among the most active investors in the space. This will be fun. Let the transformation begin. Ready to embrace tomorrow.

3 Reasons Why Artificial Intelligence Marketing Is Here to Stay | WGN Radio – 720 AM

What was once considered the stuff of science-fiction movies, artificial intelligence now seems far more of a reality than previously expected. Artificial intelligence marketing can play a substantial role in the growth of brand analysis and customer interactions. Between sentiment analysis, customer service opportunities, and marketing optimization, artificial intelligence lets marketers gain a much better understanding of their customer base.

Superaccurate GPS Chips Coming to Smartphones in 2018

Broadcom has released the first mass-market GPS chips that use newer satellite signals to boost accuracy to 30 centimeters

Illustration: Miguel Navarro/Getty Images
We’ve all been there. You’re driving down the highway, just as Google Maps instructed, when Siri tells you to “proceed east for one-half mile, then merge onto the highway.” But you’re already on the highway. After a moment of confusion and perhaps some rude words about Siri and her extended AI family, you realize the problem: Your GPS isn’t accurate enough for your navigation app to tell if you’re on the highway or on the road beside it.

Those days are nearly at an end. At the ION GNSS+ conference in Portland, Ore., today Broadcom announced that it is sampling the first mass-market chip that can take advantage of a new breed of global navigation satellite signals and will give the next generation of smartphones 30-centimeter accuracy instead of today’s 5 meters. Even better, the chip works in a city’s concrete canyons, and it consumes half the power of today’s generation of chips. The chip, the BCM47755, has been included in the design of some smartphones slated for release in 2018, but Broadcom would not reveal which.

GPS and other global navigation satellite systems (GNSSs), such as Europe’s Galileo, Japan’s QZSS, and Russia’s Glonass, allow a receiver to determine its position by calculating its distance from three or more satellites. All GNSS satellites—even the oldest generation still in use—broadcast a message called the L1 signal, which includes the satellite’s location, the time, and an identifying signature pattern. A newer generation broadcasts a more complex signal called L5 at a different frequency in addition to the legacy L1 signal. The receiver essentially uses these signals to fix its distance from each satellite based on how long it takes the signal to go from satellite to receiver.
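The ranging step can be sketched numerically. The following is a deliberately simplified illustration with made-up satellite positions and times: it ignores the receiver clock bias that real GNSS receivers must solve for as a fourth unknown, and it is not Broadcom's actual algorithm.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def ranges_from_times(t_tx, t_rx):
    """A satellite's distance is its signal's time of flight times c."""
    return C * (np.asarray(t_rx) - np.asarray(t_tx))

def solve_position(sat_pos, ranges, guess, iters=15):
    """Gauss-Newton least squares: find the point x whose distances
    to the satellites best match the measured ranges."""
    x = np.asarray(guess, dtype=float)
    sat_pos = np.asarray(sat_pos, dtype=float)
    for _ in range(iters):
        diffs = x - sat_pos                    # (n_sats, 3)
        dists = np.linalg.norm(diffs, axis=1)  # predicted ranges
        residuals = dists - ranges
        jac = diffs / dists[:, None]           # d|x - s_i| / dx
        x = x - np.linalg.lstsq(jac, residuals, rcond=None)[0]
    return x
```

With exact time measurements this recovers the receiver position; the 5-meter versus 30-centimeter difference in practice comes from how precisely the arrival times can be fixed.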

Broadcom’s receiver first locks onto the satellite with the L1 signal and then refines its calculated position with L5. The latter is superior, especially in cities, because it is much less prone to distortions from multipath reflections than L1.

In a city, the satellite’s signals reach the receiver both directly and by bouncing off of one or more buildings. The direct signal and any reflections arrive at slightly different times, and if they overlap, they add up to form a sort of signal blob. The receiver is looking for the peak of that blob to fix the time of arrival. But the messier the blob, the less accurate that fix, and the less accurate the final calculated position will be.

However, L5 signals are so brief that the reflections are unlikely to overlap with the direct signal. The receiver chip can simply ignore any signal after the first one it receives, which is the direct path. The Broadcom chip also uses information in the phase of the carrier signal to further improve accuracy.
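The first-arrival logic can be illustrated with a toy sketch. The peak times, amplitudes, and noise floor below are hypothetical, not Broadcom's actual signal processing:

```python
def strongest_peak(peaks):
    # Legacy-style fix: take the highest correlation peak. In multipath,
    # overlapping echoes can pull this peak late, biasing the range long.
    return max(peaks, key=lambda p: p[1])[0]

def first_arrival(peaks, noise_floor):
    # L5-style fix: reflections always arrive after the direct signal,
    # so take the earliest peak above the noise floor and ignore the rest.
    return min(t for t, amp in peaks if amp > noise_floor)
```

Given a direct arrival followed by two stronger, later echoes, the two strategies disagree; the first-arrival rule picks the direct path.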

Though there are advanced systems that use L5 on the market now, these are generally for industrial purposes, such as oil and gas exploration. Broadcom’s BCM47755 is the first mass-market chip that uses both L1 and L5.

Why is this only happening now? “Up to now there haven’t been enough L5 satellites in orbit,” says Manuel del Castillo, associate director of GNSS product marketing at Broadcom. At this point, there are about 30 such satellites in orbit, counting a set that only flies over Japan and Australia. Even in a city’s “narrow window of sky you can see six or seven, which is pretty good,” Del Castillo says. “So now is the right moment to launch.”

Broadcom had to get the improved accuracy to work within a smartphone’s limited power budget. Fundamentally, that came down to three things: moving to a more power-efficient 28-nanometer-chip manufacturing process, adopting a new radio architecture (which Broadcom would not disclose the details of), and designing a power-saving dual-core sensor hub. In total, they add up to a 50 percent power savings over Broadcom’s previous, less accurate chip. 

In smartphones, sensor hubs take the raw data from the system’s sensors and process it to provide only the information the phone’s applications processor needs, thereby taking the computational burden and its accompanying power draw off of the applications processor. For instance, a sensor hub might monitor the accelerometer looking for signs that you had flipped your phone’s orientation from vertical to horizontal. It would then just send the applications processor the equivalent of the word “horizontal” instead of a stream of complex accelerations.
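That event-style reduction might look like the following sketch. This is a hypothetical hub filter for illustration, not Broadcom's firmware:

```python
def classify(ax, ay, az):
    # With the phone roughly still, gravity (~9.8 m/s^2) dominates one
    # axis: y when the phone is upright, x when it is on its side.
    # (az is unused in this crude 2-state classifier.)
    return "vertical" if abs(ay) >= abs(ax) else "horizontal"

def hub_filter(samples):
    # The hub forwards an event only when the orientation changes,
    # instead of streaming every raw accelerometer sample.
    events, last = [], None
    for ax, ay, az in samples:
        state = classify(ax, ay, az)
        if state != last:
            events.append(state)
            last = state
    return events
```

Five raw samples collapse to two events, which is the power saving: the applications processor wakes only on change.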

The sensor hub in the BCM47755 takes advantage of the ARM’s “big.LITTLE” design—a dual-core architecture in which a simple low-power processor core is paired with a more complex core. The low-power core, in this case an ARM Cortex M-0, handles simple, continuous tasks. The more powerful but power-hungry core, a Cortex M-4, comes in only when it’s needed.

The BCM47755 is just the latest development in a global push for centimeter-level navigation accuracy. Bosch, Geo++, Mitsubishi Electric, and U-blox established a joint venture called Sapcorda Services in August to provide centimeter-level accuracy. Sapcorda seems to depend on using ground stations to measure errors in GPS and Galileo satellite signals due to atmospheric distortions. Those measurements would then be sent to receivers in handsets and other systems to improve accuracy.

Japan’s US $1.9 billion Quasi-Zenith Satellite System (QZSS) also relies on error correction, but it additionally improves on urban navigation by adding a set of satellites that guarantees one is visible directly overhead even in the densest part of Tokyo. The third of those four satellites launched in August. A fourth is planned for October, and the system is to come online in 2018.

Powered by WPeMatico


New leadership for MIT-IBM Watson AI Lab

Antonio Torralba has been named MIT director of the MIT-IBM Watson AI Lab effective immediately, announced Anantha Chandrakasan, dean of the MIT School of Engineering, today.

An expert in computer vision, machine learning, and human visual perception, Torralba is a professor in the Department of Electrical Engineering and Computer Science and a principal investigator at the Computer Science and Artificial Intelligence Laboratory. His projects span a wide range — from investigating object recognition and scene understanding in pictures and movies, to studying the inner workings of deep neural networks, to building models of human vision and cognition, to the development of applications and systems such as Pic2Recipe that can look at a photo of food, predict the ingredients, and suggest similar recipes. He is also an enthusiastic investigator of the intersections between visual art and computation.

“As the inaugural MIT director of our collaboration with IBM, Antonio will closely collaborate with IBM leadership and lab researchers to design and implement the lab’s ambitious research agenda,” said Chandrakasan, who is also the Vannevar Bush Professor of Electrical Engineering and Computer Science. “He is an accomplished scholar and a creative thinker with deep experience and a broad range of research interests in AI. I look forward to working with Antonio as we shape this exciting endeavor.”

Torralba is an associate editor of the International Journal in Computer Vision and served as program chair for the Computer Vision and Pattern Recognition conference in 2015. He received the 2008 National Science Foundation Career award, the best student paper award at the IEEE Conference on Computer Vision and Pattern Recognition in 2009, and the 2010 J.K. Aggarwal Prize from the International Association for Pattern Recognition. In 2017, he received the Frank Quick Faculty Research Innovation Fellowship and the Louis D. Smullin (’39) Award for Teaching Excellence. He earned a degree in telecommunications engineering from Telecom BCN in Spain, in 1994, and his PhD in signal, image, and speech processing from the National Polytechnic Institute of Grenoble in France, in 2000.   

“I am delighted by the appointment of Antonio Torralba as MIT director of the MIT-IBM Watson AI Lab,” said Dario Gil, vice president of AI and IBM Q at IBM Research, who, along with Chandrakasan, oversees the MIT-IBM collaboration. “He brings a unique combination of deep technical excellence, intellectual curiosity, and enthusiasm — which I hope become hallmarks of our collaboration. I look forward to working closely with Antonio and the joint teams across MIT and IBM to kick off what I know will be a tremendously successful collaboration.”

Torralba and the IBM director will lead the MIT-IBM Watson AI Lab, a $240 million investment by IBM in AI efforts over the next 10 years, with $90 million dedicated to supporting MIT research. They will co-chair a committee composed of equal numbers of MIT faculty and IBM researchers. This committee will review and select proposals for funding and provide strategic direction to the lab. The initial areas of joint research between MIT and IBM will be core AI algorithms, the physics of AI, the application of AI to industries, and advancing shared prosperity through AI.

Torralba and IBM are moving quickly to engage with researchers from MIT and IBM to get the lab’s first round of research projects initiated and underway. They have established a series of upcoming events through which MIT principal investigators and IBM research staff can meet, learn more about the lab, and discuss opportunities for collaboration. 

For more information, visit mitibmwatsonailab.mit.edu.


In 10 Minutes, I'll Give You the Truth About Artificial Intelligence

Facebook's New Tool Is Helping Researchers Build AI We Can Have Meaningful Conversations With

Apple's Siri and rivals like the Google Assistant, Microsoft's Cortana, and Amazon's Alexa have been around for a while, long enough for us to get used to them. And though we may use them on and off, talking to our smartphones to accomplish tasks isn't exactly silky smooth right now. Facebook wants to do something about it, and the social giant has a game plan in place.

"Solving dialog remains a long-term challenge for AI, and any progress toward that goal will likely have short-term benefits in terms of products we can build today or the development of technologies that can be useful in other areas," the company said in a post.

What the social network is trying to help build with its new platform is an AI capable of combining both. The idea is to develop a chatbot that can not only remember your preferences over time but also use them constructively in meaningful conversations, rather than merely as context.

ParlAI isn't for small-time developers, however; it's aimed at advanced research in the field. Naturally, some of that shared knowledge will also eventually make its way into Facebook's products over time.

The conversational AI from Her
A still from the film Her, showcasing a genuinely conversational voice-based AI
According to Facebook, there currently exist two main kinds of conversational AI: those like Siri and the Google Assistant that you talk to in order to give instructions, and others that serve no purpose other than entertainment.

As it stands today, digital assistants are sterile, lacking a real personality apart from the jokes hard-coded into them. So Facebook today introduced a new research tool it has been working on, to help AI designers build tools more capable of holding a lucid, structured conversation with people.

"ParlAI is a platform we hope will bring together the community of researchers working on AI agents that perform dialog, and continue pushing the state of the art in dialog research."

Called ParlAI (pronounced "par-lay"), the social media network describes it as a "one-stop shop for dialog research." Not only does it provide AI developers and researchers with a training and testing framework for their chatbots, it also serves as a repository for them to share their approaches with other developers, speeding along our research into practical AI. Furthermore, the platform hooks into Amazon's Mechanical Turk to give developers access to hired humans who interact with, test, and correct their chatbots, a vital part of the learning process.


Identifying optimal product prices

How can online businesses leverage vast historical data, computational power, and sophisticated machine-learning techniques to quickly analyze and forecast demand, and to optimize pricing and increase revenue?

A research highlight article in the Fall 2017 issue of MIT Sloan Management Review by MIT Professor David Simchi-Levi describes new insights into demand forecasting and price optimization.

Algorithm increases revenue by 10 percent in six months

Simchi-Levi developed a machine-learning algorithm, which won the INFORMS Revenue Management and Pricing Section Practice Award, and first implemented it at online retailer Rue La La.

The initial research goal was to reduce inventory, but what the company ended up with was “a cutting-edge, demand-shaping application that has a tremendous impact on the retailer’s bottom line,” Simchi-Levi says.

Rue La La’s big challenge was pricing on items that have never been sold before and therefore required a pricing algorithm that could set higher prices for some first-time items and lower prices for others.

Within six months of implementing the algorithm, it increased Rue La La’s revenue by 10 percent.

Forecast, learn, optimize

Simchi-Levi’s process involves three steps for generating better price predictions:

The first step involves matching products with similar characteristics to the products to be optimized. A relationship between demand and price is then predicted with the help of a machine-learning algorithm.

The second step requires testing a price against actual sales, and adjusting the product’s pricing curve to match real-life results.  

In the third and final step, a new curve is applied to help optimize pricing across many products and time periods.
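The three steps above can be sketched with a toy linear demand model. The linear form and all the numbers are illustrative assumptions; the published method is considerably more sophisticated:

```python
def fit_demand(prices, sales):
    # Step 1 (simplified): fit a linear demand curve d(p) = a + b*p by
    # least squares over comparable products' price/sales history.
    n = len(prices)
    mp, ms = sum(prices) / n, sum(sales) / n
    b = (sum((p - mp) * (s - ms) for p, s in zip(prices, sales))
         / sum((p - mp) ** 2 for p in prices))
    return ms - b * mp, b

def adjust(a, b, test_price, observed_sales):
    # Step 2: shift the curve so it passes through the demand actually
    # observed at the test price.
    return a + (observed_sales - (a + b * test_price)), b

def best_price(a, b):
    # Step 3: revenue p * (a + b*p) is maximized at p* = -a / (2b)
    # for a downward-sloping curve (b < 0).
    return -a / (2 * b)
```

Forecast from history, correct the curve against a real test sale, then optimize on the corrected curve.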

Predicting consumer demand at Groupon

Groupon has a huge product portfolio and launches thousands of new deals every day, offering each for only a short time. With such short sales periods, predicting demand was a big problem and forecasting nearly impossible.

Applying Simchi-Levi’s approach to this use case began by generating multiple demand functions. By then applying a test price and observing customers’ decisions, insights were gleaned on how much was sold — information that could identify the demand function closest to the level of sales at the learning price. This was the final demand-price function used, and it was used as the basis for optimizing price during the optimization period.
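The selection step described above, matching observed sales at the learning price to the nearest candidate demand function, can be sketched as follows (the candidate functions here are invented for illustration):

```python
def pick_demand_function(candidates, test_price, observed_sales):
    # Each candidate maps price -> predicted demand. Keep the one whose
    # prediction at the learning price is closest to what actually sold;
    # that function is then used for price optimization.
    return min(candidates, key=lambda d: abs(d(test_price) - observed_sales))
```

With three hypothetical candidates and 75 units sold at a test price of 20, the middle curve (predicting 80) wins.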

Analysis of the results from the field experiment showed that this new approach increased Groupon’s revenue by about 21 percent but had a much bigger impact on low-volume deals. For deals with fewer bookings per day than the median, the average increase in revenue was 116 percent, while revenue increased only 14 percent for deals with more bookings per day than the median.

Potential to disrupt consumer banking and insurance

The ability to automate pricing enables companies to optimize pricing for more products than most organizations currently find possible. This method has also been used for a bricks-and-mortar application by applying the method to a company’s promotion and pricing, in various retail channels, with similar results.

“I am very pleased that our pricing algorithm can achieve such positive results in a short timeframe,” Simchi-Levi says. “We expect that this method will soon be used not only in retail but also in the consumer banking industry. Indeed, my team at MIT has developed related methods that have recently been applied in the airline and insurance industries.”


Engineers design drones that can stay aloft for five days

In the event of a natural disaster that disrupts phone and Internet systems over a wide area, autonomous aircraft could potentially hover over affected regions, carrying communications payloads that provide temporary telecommunications coverage to those in need.

However, such unpiloted aerial vehicles, or UAVs, are often expensive to operate, and can only remain in the air for a day or two, as is the case with most autonomous surveillance aircraft operated by the U.S. Air Force. Providing adequate and persistent coverage would require a relay of multiple aircraft, landing and refueling around the clock, with operational costs of thousands of dollars per hour, per vehicle. 

Now a team of MIT engineers has come up with a much less expensive UAV design that can hover for longer durations to provide wide-ranging communications support. The researchers designed, built, and tested a UAV resembling a thin glider with a 24-foot wingspan. The vehicle can carry 10 to 20 pounds of communications equipment while flying at an altitude of 15,000 feet. Weighing in at just under 150 pounds, the vehicle is powered by a 5-horsepower gasoline engine and can keep itself aloft for more than five days — longer than any gasoline-powered autonomous aircraft has remained in flight, the researchers say.

The team is presenting its results this week at the American Institute of Aeronautics and Astronautics Conference in Denver, Colorado. The team was led by R. John Hansman, the T. Wilson Professor of Aeronautics and Astronautics; and Warren Hoburg, the Boeing Assistant Professor of Aeronautics and Astronautics. Hansman and Hoburg are co-instructors for MIT’s Beaver Works project, a student research collaboration between MIT and the MIT Lincoln Laboratory.

A solar no-go

Hansman and Hoburg worked with MIT students to design a long-duration UAV as part of a Beaver Works capstone project — typically a two- or three-semester course that allows MIT students to design a vehicle that meets certain mission specifications, and to build and test their design.

In the spring of 2016, the U.S. Air Force approached the Beaver Works collaboration with an idea for designing a long-duration UAV powered by solar energy. The thought at the time was that an aircraft, fueled by the sun, could potentially remain in flight indefinitely. Others, including Google, have experimented with this concept,  designing solar-powered, high-altitude aircraft to deliver continuous internet access to rural and remote parts of Africa.

But when the team looked into the idea and analyzed the problem from multiple engineering angles, they found that solar power — at least for long-duration emergency response — was not the way to go.

“[A solar vehicle] would work fine in the summer season, but in winter, particularly if you’re far from the equator, nights are longer, and there’s not as much sunlight  during the day. So you have to carry more batteries, which adds weight and makes the plane bigger,” Hansman says. “For the mission of disaster relief, this could only respond to disasters that occur in summer, at low latitude. That just doesn’t work.”

The researchers came to their conclusions after modeling the problem using GPkit, a software tool developed by Hoburg that allows engineers to determine the optimal design decisions or dimensions for a vehicle, given certain constraints or mission requirements.

This kind of modeling is not unique among initial aircraft design tools, but unlike most such tools, which take into account only a handful of main constraints, Hoburg's method allowed the team to consider around 200 constraints and physical models simultaneously, and to fit them all together to create an optimal aircraft design.

“This gives you all the information you need to draw up the airplane,” Hansman says. “It also says that for every one of these hundreds of parameters, if you changed one of them, how much would that influence the plane’s performance? If you change the engine a bit, it will make a big difference. And if you change wingspan, will it show an effect?”
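The per-parameter report Hansman describes is essentially a sensitivity analysis. As a rough stand-in for what GPkit computes, it can be sketched with finite differences on a toy endurance model; the model and numbers below are invented for illustration, not the team's actual constraint set:

```python
def endurance(params):
    # Toy stand-in model: endurance ~ fuel energy / power required, with
    # required power rising with weight and falling with wingspan.
    fuel, weight, span, sfc = (params[k] for k in
                               ("fuel_kg", "weight_kg", "span_m", "sfc"))
    power_kw = 0.05 * weight / span
    return fuel / (sfc * power_kw)

def sensitivities(model, params, eps=1e-4):
    # Relative (log-log) sensitivity: percent change in output per
    # percent change in each parameter, estimated by finite differences.
    base = model(params)
    out = {}
    for k, v in params.items():
        bumped = dict(params, **{k: v * (1 + eps)})
        out[k] = (model(bumped) - base) / base / eps
    return out
```

In this toy model a 1 percent increase in fuel or wingspan buys about 1 percent more endurance, while a 1 percent increase in weight or fuel consumption costs about 1 percent, exactly the kind of "change one parameter, see the effect" report described above.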

Framing for takeoff

After determining, through their software estimations, that a solar-powered UAV would not be feasible, at least for long-duration use in any part of the world, the team performed the same modeling for a gasoline-powered aircraft. They came up with a design that was predicted to stay in flight for more than five days, at altitudes of 15,000 feet, in up to 94th-percentile winds, at any latitude.

In the fall of 2016, the team built a prototype UAV, following the dimensions determined by students using Hoburg’s software tool. To keep the vehicle lightweight, they used materials such as carbon fiber for its wings and fuselage, and Kevlar for the tail and nosecone, which houses the payload. The researchers designed the UAV to be easily taken apart and stored in a FedEx box, to be shipped to any disaster region and quickly reassembled.

This spring, the students refined the prototype and developed a launch system, fashioning a simple metal frame to fit on a typical car roof rack. The UAV sits atop the frame as a driver accelerates the launch vehicle (a car or truck) up to rotation speed — the UAV’s optimal takeoff speed. At that point, the remote pilot would angle the UAV toward the sky, automatically releasing a fastener and allowing the UAV to lift off.

In early May, the team put the UAV to the test, conducting flight tests at Plum Island Airport in Newburyport, Massachusetts. For initial flight testing, the students modified the vehicle to comply with FAA regulations for small unpiloted aircraft, which apply to drones flying at low altitude and weighing less than 55 pounds. To reduce the UAV's weight from 150 to under 55 pounds, the researchers simply loaded it with a smaller ballast payload and less gasoline.

In their initial tests, the UAV successfully took off, flew around, and landed safely. Hoburg says there are special considerations that have to be made to test the vehicle over multiple days, such as having enough people to monitor the aircraft over a long period of time.

“There are a few aspects to flying for five straight days,” Hoburg says. “But we’re pretty confident that we have the right fuel burn rate and right engine that we could fly it for five days.”

“These vehicles could be used not only for disaster relief but also other missions, such as environmental monitoring. You might want to keep watch on wildfires or the outflow of a river,” Hansman adds. “I think it’s pretty clear that someone within a few years will manufacture a vehicle that will be a knockoff of this.”

This research was supported, in part, by MIT Lincoln Laboratory.



Giving robots a sense of touch

Eight years ago, Ted Adelson’s research group at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) unveiled a new sensor technology, called GelSight, that uses physical contact with an object to provide a remarkably detailed 3-D map of its surface.

Now, by mounting GelSight sensors on the grippers of robotic arms, two MIT teams have given robots greater sensitivity and dexterity. The researchers presented their work in two papers at the International Conference on Robotics and Automation last week.

In one paper, Adelson’s group uses the data from the GelSight sensor to enable a robot to judge the hardness of surfaces it touches — a crucial ability if household robots are to handle everyday objects.

In the other, Russ Tedrake’s Robot Locomotion Group at CSAIL uses GelSight sensors to enable a robot to manipulate smaller objects than was previously possible.

The GelSight sensor is, in some ways, a low-tech solution to a difficult problem. It consists of a block of transparent rubber — the “gel” of its name — one face of which is coated with metallic paint. When the paint-coated face is pressed against an object, it conforms to the object’s shape.

The metallic paint makes the object’s surface reflective, so its geometry becomes much easier for computer vision algorithms to infer. Mounted on the sensor opposite the paint-coated face of the rubber block are three colored lights and a single camera.

“[The system] has colored lights at different angles, and then it has this reflective material, and by looking at the colors, the computer … can figure out the 3-D shape of what that thing is,” explains Adelson, the John and Dorothy Wilson Professor of Vision Science in the Department of Brain and Cognitive Sciences.

In both sets of experiments, a GelSight sensor was mounted on one side of a robotic gripper, a device somewhat like the head of a pincer, but with flat gripping surfaces rather than pointed tips.

Contact points

For an autonomous robot, gauging objects’ softness or hardness is essential to deciding not only where and how hard to grasp them but how they will behave when moved, stacked, or laid on different surfaces. Tactile sensing could also aid robots in distinguishing objects that look similar.

In previous work, robots have attempted to assess objects’ hardness by laying them on a flat surface and gently poking them to see how much they give. But this is not the chief way in which humans gauge hardness. Rather, our judgments seem to be based on the degree to which the contact area between the object and our fingers changes as we press on it. Softer objects tend to flatten more, increasing the contact area.

The MIT researchers adopted the same approach. Wenzhen Yuan, a graduate student in mechanical engineering and first author on the paper from Adelson’s group, used confectionary molds to create 400 groups of silicone objects, with 16 objects per group. In each group, the objects had the same shapes but different degrees of hardness, which Yuan measured using a standard industrial scale.

Then she pressed a GelSight sensor against each object manually and recorded how the contact pattern changed over time, essentially producing a short movie for each object. To both standardize the data format and keep the size of the data manageable, she extracted five frames from each movie, evenly spaced in time, which described the deformation of the object that was pressed.

Finally, she fed the data to a neural network, which automatically looked for correlations between changes in contact patterns and hardness measurements. The resulting system takes frames of video as inputs and produces hardness scores with very high accuracy. Yuan also conducted a series of informal experiments in which human subjects palpated fruits and vegetables and ranked them according to hardness. In every instance, the GelSight-equipped robot arrived at the same rankings.
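The intuition, that softer objects flatten more, so the contact patch grows faster across the frames, can be sketched with a crude hand-written proxy. The actual system learns this mapping with a neural network; the grids and threshold below are made up:

```python
def contact_areas(frames, threshold=0.1):
    # Each frame is a 2-D grid of gel deformation depths; the contact
    # patch is the set of pixels pressed beyond a small threshold.
    return [sum(1 for row in f for v in row if v > threshold) for f in frames]

def hardness_score(frames):
    # Soft objects flatten under pressure, so their contact patch grows
    # quickly from the first frame to the last. Use the inverse of that
    # growth as a crude hardness proxy: harder object -> higher score.
    areas = contact_areas(frames)
    growth = (areas[-1] - areas[0]) / max(areas[0], 1)
    return 1.0 / (1.0 + growth)
```

A soft object whose patch quadruples scores lower than a hard one whose patch barely grows, matching the ordering the learned system produces.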

Yuan is joined on the paper by her two thesis advisors, Adelson and Mandayam Srinivasan, a senior research scientist in the Department of Mechanical Engineering; Chenzhuo Zhu, an undergraduate from Tsinghua University who visited Adelson’s group last summer; and Andrew Owens, who did his PhD in electrical engineering and computer science at MIT and is now a postdoc at the University of California at Berkeley.

Obstructed views

The paper from the Robot Locomotion Group was born of the group’s experience with the Defense Advanced Research Projects Agency’s Robotics Challenge (DRC), in which academic and industry teams competed to develop control systems that would guide a humanoid robot through a series of tasks related to a hypothetical emergency.

Typically, an autonomous robot will use some kind of computer vision system to guide its manipulation of objects in its environment. Such systems can provide very reliable information about an object’s location — until the robot picks the object up. Especially if the object is small, much of it will be occluded by the robot’s gripper, making location estimation much harder. Thus, at exactly the point at which the robot needs to know the object’s location precisely, its estimate becomes unreliable. This was the problem the MIT team faced during the DRC, when their robot had to pick up and turn on a power drill.

“You can see in our video for the DRC that we spend two or three minutes turning on the drill,” says Greg Izatt, a graduate student in electrical engineering and computer science and first author on the new paper. “It would be so much nicer if we had a live-updating, accurate estimate of where that drill was and where our hands were relative to it.”

That’s why the Robot Locomotion Group turned to GelSight. Izatt and his co-authors — Tedrake, the Toyota Professor of Electrical Engineering and Computer Science, Aeronautics and Astronautics, and Mechanical Engineering; Adelson; and Geronimo Mirano, another graduate student in Tedrake’s group — designed control algorithms that use a computer vision system to guide the robot’s gripper toward a tool and then turn location estimation over to a GelSight sensor once the robot has the tool in hand.
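The handoff logic described above can be sketched as a simple source-selection rule. This is a hypothetical illustration of the idea, not the group's control code: the camera-based estimate is trusted until the tool is grasped, at which point the tactile estimate takes over, since the gripper occludes the camera but not the GelSight sensor.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

Pose = Tuple[float, float, float]  # simplified: position only

@dataclass
class PoseEstimate:
    pose: Pose
    source: str  # "vision" or "tactile"

def estimate_tool_pose(in_hand: bool,
                       vision_pose: Pose,
                       tactile_pose: Optional[Pose]) -> PoseEstimate:
    """Hand off estimation from the vision system to the tactile sensor
    once the tool is in the gripper and a tactile estimate is available."""
    if in_hand and tactile_pose is not None:
        return PoseEstimate(tactile_pose, "tactile")
    return PoseEstimate(vision_pose, "vision")

# Before grasping, vision guides the gripper; after, touch takes over.
print(estimate_tool_pose(False, (0.5, 0.2, 0.1), None).source)      # vision
print(estimate_tool_pose(True, (0.5, 0.2, 0.1), (0.51, 0.2, 0.1)).source)  # tactile
```

A real controller would fuse both estimates with uncertainty weights rather than switching outright, but the switch captures the core insight: use each sensor where it is reliable.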

In general, the challenge with such an approach is reconciling the data produced by a vision system with data produced by a tactile sensor. But GelSight is itself camera-based, so its data output is much easier to integrate with visual data than the data from other tactile sensors.

In Izatt’s experiments, a robot with a GelSight-equipped gripper had to grasp a small screwdriver, remove it from a holster, and return it. Of course, the data from the GelSight sensor don’t describe the whole screwdriver, just a small patch of it. But Izatt found that, as long as the vision system’s estimate of the screwdriver’s initial position was accurate to within a few centimeters, his algorithms could deduce which part of the screwdriver the GelSight sensor was touching and thus determine the screwdriver’s position in the robot’s hand.

“I think that the GelSight technology, as well as other high-bandwidth tactile sensors, will make a big impact in robotics,” says Sergey Levine, an assistant professor of electrical engineering and computer science at the University of California at Berkeley. “For humans, our sense of touch is one of the key enabling factors for our amazing manual dexterity. Current robots lack this type of dexterity and are limited in their ability to react to surface features when manipulating objects. If you imagine fumbling for a light switch in the dark, extracting an object from your pocket, or any of the other numerous things that you can do without even thinking — these all rely on touch sensing.”

“Software is finally catching up with the capabilities of our sensors,” Levine adds. “Machine learning algorithms inspired by innovations in deep learning and computer vision can process the rich sensory data from sensors such as the GelSight to deduce object properties. In the future, we will see these kinds of learning methods incorporated into end-to-end trained manipulation skills, which will make our robots more dexterous and capable, and maybe help us understand something about our own sense of touch and motor control.”
