Nobel Prize Awarded for Detection of Gravitational Waves

Three share award for building LIGO and hearing black holes collide

Photo: Molly Riley/AFP/Getty Images
Rainer Weiss, Barry Barish, and Kip Thorne have been awarded the 2017 Nobel Prize in Physics for their gravitational wave work.
The three men who won this year’s Nobel Prize in physics opened up a new field of science and helped prove Einstein correct by detecting gravitational waves from a pair of colliding black holes.

Rainer Weiss, a physicist from the Massachusetts Institute of Technology, will receive half the prize, while California Institute of Technology physicists Barry C. Barish and Kip S. Thorne will split the other half. The three were awarded for conceiving and creating the Laser Interferometer Gravitational-Wave Observatory, or LIGO.

LIGO consists of two facilities, one in Hanford, Wash., and one in Livingston, La. Each is made up of two 4-kilometer-long tunnels at right angles to each other. Scientists fire laser beams down each tunnel and measure their reflections. When a gravitational wave passes through Earth, it stretches and compresses space by a minuscule amount, like a ripple in a pond, and produces a tiny perturbation in the light. In September 2015, LIGO detected the ripple produced when two massive black holes spiraled into each other, 1.3 billion light-years away.
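To make "a tiny perturbation" concrete: the peak strain of that 2015 signal was on the order of 10^-21 (a rough order-of-magnitude figure, not a number quoted in this article), which over LIGO's 4-kilometer arms works out to a length change far smaller than a proton:

```python
# A passing gravitational wave stretches and squeezes space by a fractional
# amount called the strain. The peak strain of the September 2015 event was
# on the order of 1e-21 (an approximate, order-of-magnitude figure).
strain = 1e-21
arm_length_m = 4_000  # each LIGO arm is 4 kilometers long

# The change in arm length is roughly strain * arm length.
delta_m = strain * arm_length_m
print(f"arm-length change: {delta_m:.0e} m")  # ~4e-18 m

# For scale: a proton is about 1.7e-15 m across.
print(f"fraction of a proton diameter: {delta_m / 1.7e-15:.4f}")
```

That thousandth-of-a-proton displacement is why the instrument took four decades to build.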

Just last week, a sister detector in Italy called Virgo announced the discovery of another collision, the fourth reported so far. That was the first to be measured by three detectors—which allows scientists to locate the source in the sky and point other telescopes toward the event.

“It’s a new branch of science, gravitational wave astronomy,” says Sheldon Glashow, a professor of physics at Boston University, who himself won the Nobel in 1979 for contributions to the theory that two fundamental forces of physics, the weak nuclear force and the electromagnetic force, interacted.

Glashow says the Nobel committee did a good job of divvying up the prize, which totals 9 million Swedish kronor, or roughly $1.1 million. “They recognized two of the pioneers of the search, together with the person who actually made it happen.”

Weiss and another scientist, Ron Drever of the University of Glasgow, had separately come up with the idea to use lasers to detect gravitational waves in the mid-1970s. Weiss and Thorne figured out how they might make a detector, and Drever joined them in the project. The group had trouble gaining funding from the National Science Foundation, and eventually Drever, who died this past March, was forced out of the project. Glashow says it wasn’t until Barish came in, in 1994, that the project finally went ahead. “NSF was about to cancel it, but then they said ‘Get Barry, and he’ll solve the problem,’” Glashow says.

Weiss, he says, was the one who figured out what kind of sensitivity such a detector would need to be able to detect gravitational waves. “He was the guy who knew what they needed. Barish was the guy who made it happen,” Glashow says. Thorne, meanwhile, was the evangelist, convincing scientists and the public that this was a worthwhile endeavor.

The 2015 detection recorded the merger of two black holes, one with 29 times the mass of the sun and the other with 36 times the sun’s mass. Glashow says that was surprising to physicists, who believed most ordinary black holes would be only two or three solar masses, except for the giant ones at the centers of galaxies, which can be thousands or millions of times as massive. Now, he says, scientists have to figure out what would produce these intermediate black holes.

LIGO is currently shut down, and the detectors are being upgraded to make them more sensitive. When they’re put back online, they should be able to detect events twice as far away, which means covering eight times the volume of space.
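The range-to-volume arithmetic is just the cube law for the sphere of detectable sources:

```python
# Detectable events fill a sphere around the detector, and a sphere's volume
# grows as the cube of its radius, so doubling the reach cubes into the volume.
range_factor = 2
volume_factor = range_factor ** 3
print(volume_factor)  # 8
```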

One project will involve trying to measure the polarization of the cosmic microwave background radiation, the signature left over from the Big Bang. That, says Glashow, could tell scientists something about the nature of primordial black holes formed near the beginning of the universe, about which they know very little. Measuring the polarization of the gravitational waves produced by colliding black holes “tells you the tiny little details of Einstein’s theory,” he says.

And some people—not Glashow, he points out—think that gravitational studies will give hints about the existence of axions, theoretical particles that, if they exist, may help explain dark matter, one of the biggest mysteries in cosmology today.

Whatever LIGO and similar detectors find, they’re opening up a new field of science, Glashow says. “Every time we open a window—radio astronomy, x-ray astronomy—we find things we didn’t expect,” he says. With gravitational wave astronomy, “we’ve found the things we expected, but we’re beginning to find the things we didn’t expect.”

Powered by WPeMatico


More Teachers, Fewer 3D Printers: How to Improve K–12 Computer Science Education

Tech companies and the U.S. government recently pledged $500 million for STEM programs

Photo: iStockphoto
Last week, the Trump White House announced $200 million per year in federal funding to improve K–12 computer science education. The next day, tech leaders including Amazon, Facebook, Google, Microsoft, and Salesforce pledged another $300 million, spread over five years. The initiative aims to support K–12 STEM education, with a focus on computer science.

The need for basic computer science education has never been greater. Software and computers drive the economy, aiding mines and farms as well as retail stores, banks, and healthcare providers. There are 500,000 computer science job openings in the U.S., spanning every industry and state. That’s more than 10 times the number of students who graduated with computer science degrees last year, according to the nonprofit, which has been working to establish and expand CS access in schools.

The tech industry grumbles about the shortage of qualified workers. Yet fewer than half of American K–12 schools offer computer science classes. That number rises to around three-fourths of K–12 schools when you include CS exposure through after-school activities and clubs, according to a 2016 Google-Gallup report.

Newer data on high school Advanced Placement exams collected by College Board does show positive signs. The number of “AP Computer Science A” exam-takers more than doubled from 2012 to 2016, reaching more than 54,000 students. And another 40,000 students took the new “AP Computer Science Principles” exam introduced in 2017. But that’s still a small fraction of the 15.1 million students attending high school this year.
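Putting those exam counts side by side with total enrollment shows just how small that fraction is (counts as quoted above; the two exam populations may overlap, so the sum is an upper bound):

```python
ap_cs_a = 54_000           # "AP Computer Science A" exam-takers in 2016
ap_cs_principles = 40_000  # "AP Computer Science Principles" exam-takers in 2017
high_school_students = 15_100_000

share = (ap_cs_a + ap_cs_principles) / high_school_students
print(f"{share:.1%}")  # roughly 0.6% of high schoolers
```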

A vast majority of students, parents, and educators are interested in and highly value computer science, the Google-Gallup report found. But state policies and curricula are lagging. Ten states currently have K–12 computer science standards, and 10 others are working on them. Only 33 states count computer science courses toward high school graduation credit. Arkansas is the only state that has made K–12 CS education mandatory. Virginia and Rhode Island are on that path.

Getting to this point has taken three decades of tireless state-level advocacy from and the Association for Computing Machinery. And there’s still a long way to go. Many states treat CS as an elective rather than a core academic subject like math and chemistry. Plus, most state standards focus on basic computer skills rather than on understanding core computing concepts.

Ideal CS courses should teach computational thinking: logical thinking, abstraction, algorithmic expression, problem decomposition, stepwise fault isolation, and debugging. “Every 21st century citizen needs fluency in computational thinking,” says Ed Lazowska, a computer science professor at the University of Washington. “It is never too early to start learning this. Elementary school kids can and do learn it.”
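As a purely illustrative sketch (not drawn from any curriculum named here), those habits map naturally onto even a first programming exercise: decomposing a problem into small functions, each checkable on its own.

```python
def parse_scores(raw):
    """Abstraction: turn raw text like '90, 85, 77' into a list of numbers."""
    return [int(piece) for piece in raw.split(",")]

def average(scores):
    """Algorithmic expression of the arithmetic mean."""
    return sum(scores) / len(scores)

# Stepwise fault isolation: test each piece before combining them,
# so a bug can be pinned to one small function.
assert parse_scores("90, 85, 77") == [90, 85, 77]
assert average([90, 85, 77]) == 84.0

print(average(parse_scores("90, 85, 77")))  # 84.0
```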

In fact, it’s crucial to start early. Just like reading and math, starting in kindergarten and learning incrementally builds a foundation for logic, critical thinking, and creativity. These skills are difficult to catch up on later. Plus, says’s Chief Academic Officer Pat Yongpradit, reaching students earlier helps get girls and minorities on board. “They build confidence and efficacy around their ability to do computer science,” he says. “You can avoid stereotypes or at least make them aware of them.”

Other countries are stepping up CS education efforts. Israel leads the world, according to a report by the Information Technology & Innovation Foundation (ITIF). Its curriculum emphasizes CS as a science instead of teaching only coding, and on a per-capita basis the nation has 16.2 times as many students taking rigorous high school computer science as the U.S. does. The U.K. mandates CS for students aged 5 to 14, but is struggling to train all the teachers it needs. According to the ITIF, Australia, Finland, Denmark, and Singapore are reforming CS education by introducing it to primary school students, adding deep concepts to curricula, and training specialized teachers.

Addressing the teacher shortage should be the number one use for the new funds allocated by the Trump administration, says Mark Stehlik, a computer science professor at Carnegie Mellon University. A lack of qualified teachers is the biggest barrier to CS education in the U.S., he says, and he thinks the problem is going to get worse. An earlier generation of CS educators has started to retire, and he says younger CS graduates “aren’t going into education because they can make twice or more working in the software industry.”

One solution could be to expand the reach of each CS educator through online classes. But “online curricula aren’t going to save the day, especially for elementary and high school,” Stehlik says. “A motivated teacher who can inspire students and provide tailored feedback to them is the coin of the realm here.”

Where should the money not be spent? On hardware and equipment. Laptops, robots, and 3D printers are important, says’s Yongpradit, “but they don’t make a CS class. A trained teacher makes a CS class. So money should be focused on training teachers and offering robust curriculum.”

Corporations also need to take more seriously their responsibility to donate money and technology, and, perhaps more crucially, provide volunteers who can share their skills and knowledge with students. Some top tech firms like Microsoft already have programs that encourage employees to volunteer-teach high school courses, which typically means spending a couple of hours a week delivering the class. But it would be more effective if those employees spent several days at the school, teaching students and also mentoring teachers.

This approach can be a win-win for everyone, Stehlik says: building community and social good in the short term, and in the long term, ensuring a workforce for the companies themselves.


Clear and Unbiased Facts About Drones (Without All the Buzz)



A drone, also called an unmanned aerial vehicle (UAV) among many other names, is an aircraft that flies without a pilot or anyone on board. These aircraft can be controlled remotely by a person on the ground using a handheld transmitter, or by on-board computers. UAVs were initially flown mostly by ground operators, but as technology has advanced, more and more aircraft are being designed to be controlled by on-board computers.

The concept of an unmanned aerial vehicle can be traced back to the early twentieth century. UAVs were originally intended solely for military purposes but have since found a place in our everyday lives. Reginald Denny, a popular movie star and avid collector of model airplanes, is credited with creating the first remotely piloted vehicle in 1935. Since then, the aircraft have adapted to new technologies and can now be found fitted with cameras and other useful extras. As a result, UAVs are used for policing, security work, surveillance, and firefighting; many companies also use them to inspect hard-to-reach assets such as piping and wiring, adding an extra layer of safety and security.

The rise in popularity of these devices has, however, brought some downsides along with the positives, as new rules and regulations have had to be introduced to manage the situation. As UAVs became more powerful and technologies improved, they could fly higher and farther from the operator. This has led to problems with airport interference around the globe. In 2014, South Africa announced that it would tighten security against illegal flying in South African airspace. A year later, the U.S. announced that it was holding a meeting to discuss the requirements for registering a commercial drone.

In addition to the previously mentioned uses, drones are now also used for surveying crops, counting animals in a particular area, and monitoring crowds, among many other tasks. Drones have changed the way many industries operate and have allowed many organizations to become more efficient. They have also helped to increase safety and play a role in saving lives: forest fires and natural disasters can be monitored, and a drone can be used to alert the relevant authorities to anyone who is in trouble and needs help. The exact location of these events can also be pinpointed easily.

Drones have also become a hobby for many people around the world. In the United States, recreational use of such a device is legal; however, the owner must take some precautions when flying. The aircraft must comply with certain guidelines; for instance, the device cannot weigh more than 55 pounds. The drone must also avoid interfering with airport operations, and if a drone is flown within 5 miles of an airport, the airport’s air traffic control tower must be informed beforehand.

How Experts Comb Satellite Images for Clues on North Korea’s Nuclear Tests

Satellite imagery analysts tracking North Korea’s nuclear program need more than just technical skills

Image: Planet
Satellite images of North Korean missile testing sites or nuclear reactor facilities can sometimes reveal smears of yellow or befuddling brown objects at certain times of the year. A casual armchair observer might suspect those patterns indicate something nefarious. But they actually represent a mundane harvest practice: North Korean workers often dry out their corn harvest on pavement before putting the crop into brown sacks.

Growing swarms of commercial satellites have provided new tools for both government spies and independent analysts to peek inside North Korea as the isolated “Hermit Kingdom” races to strengthen its arsenal of nuclear weapons. But the process of understanding satellite images of suspected North Korean missile or nuclear test sites is far from easy and requires more than just the latest satellite imaging technology. The best analysts need more than the skills to analyze near-infrared imagery or use mapping software—they must also tap into a wide swath of cultural and technical knowledge when trying to figure out what a particular satellite image can reveal about the secretive North Korean nuclear program.

“Satellite imagery interpretation is very interdisciplinary,” says Melissa Hanham, a senior research associate in the East Asia Nonproliferation Program at the Middlebury Institute of International Studies (MIIS) at Monterey, Calif. “You have to be a little bit of an expert in everything, and you have to know your limits.”

Analysts have access to more satellite data than ever these days. Companies such as DigitalGlobe and Airbus have satellites capable of providing high-resolution images of ground objects as small as a third of a meter per pixel for commercial customers. Newer companies such as Planet have deployed constellations of dozens of smaller satellites with lower resolution capabilities around 3 meters per pixel. But the swarm of small satellites can capture more frequent images of locations in North Korea and elsewhere, which provides analysts a new tool for tracking a given site’s normal “pattern of life.”
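The gap between those two resolutions compounds, because each pixel covers a square patch of ground. A rough sketch with the figures quoted above:

```python
high_res_m = 0.3   # high-resolution imagery: about a third of a meter per pixel
small_sat_m = 3.0  # small-satellite constellations: around 3 meters per pixel

# Ground area per pixel scales with the square of the ground sample distance.
area_ratio = (small_sat_m / high_res_m) ** 2
print(f"{area_ratio:.0f}")  # ~100: one low-res pixel spans ~100x the ground area
```

The trade-off is frequency: the small satellites revisit a site far more often, which is what makes "pattern of life" tracking possible.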

Both increased access to daily satellite imagery and broad expertise on North Korea proved helpful in a 2016 Washington Post analysis by Hanham and her colleagues. Part of their analysis involved identifying corn piled along the access road leading to Panghyon Airbase in Kusong, North Korea, which represents an alternative missile test site for the regime. They were able to quickly rule out the idea that the brown sacks of corn might represent some mysterious equipment related to missile launches.

“While satellite imagery can give you the keys to the castle for a country that is closed off, there is also lots of room for error without that context,” says David Schmerler, a research associate at MIIS.

The MIIS researchers used near-infrared satellite imagery to highlight some rather large burn scars left in the surrounding vegetation by two North Korean missile launch attempts that went awry in October 2016. The sequence of before-and-after daily images was taken by a constellation of smaller satellites owned by Planet. (More recently, the publication 38 North also made use of Planet’s daily satellite imagery to examine landslides resulting from North Korea’s sixth nuclear test.)

Technical and cultural knowledge go hand in hand, says Joseph Bermudez Jr., CEO and co-founder of KPA Associates, LLC, and an analyst focused on North Korea’s defense and intelligence affairs and third-world ballistic missile development. He recalled being sent an image in which someone thought they had spotted filled-in bomb craters left over from the Korean War. In fact, the circles represented traditional North Korean burial mounds, in which a circular area is cleared of grass and a mound placed in the middle.

“There is an incredible range to what people are saying about satellite images of North Korea, and the vast majority of it is inaccurate,” Bermudez Jr. says. “The reason for that is satellite imagery interpretation is a skill that needs to be learned or taught—it needs experience behind it before you reach a level of accuracy that is acceptable.”

Bermudez Jr. usually begins his work by applying color correction, sharpening, and other processing tricks to make a satellite image look as good as possible for inspection. He also rotates the image to the “look angle,” the direction from which the original image was captured.

If he has a general idea of where in the image the activity of most interest lies—such as a missile launchpad—he will zoom in and compare it with previous images of the same site. But he will also have his computer software pan slowly across the entire image while zoomed in, so that he can get a good look at everything in high detail.

Diverse life experiences and real-world skills among a team of analysts can also prove incredibly helpful. Bermudez Jr. learned about structures as a fireman paramedic earlier in his life. As a wilderness instructor, he also spent a lot of time walking around in the mountains and understanding natural drainage systems such as rivers and valleys. Other analysts may know more about building ships or railroad operations. Each analyst can then build “interpretation keys” based on his or her knowledge to share with others in examining satellite images.

Satellite imagery analysts also benefit from leaning on outside experts in other fields. Hanham has previously tapped into the expertise of a cousin who is a long-haul trucker and her microbiologist mom’s knowledge of lab equipment. “I’ve leveraged my family,” she says.

The community of satellite imagery analysts with deep expertise on North Korea is small. That limited amount of human expertise may prove a bottleneck at a time when commercial satellites continue to build ever-larger databases of imagery on North Korea. But increased use of image-recognition algorithms based on machine learning could help, Hanham says. She and her MIIS colleagues hope to eventually have more of a semi-automated process of identifying certain objects.

Commercial satellites may soon provide even broader space surveillance of North Korea with a wider array of imaging equipment. Some already make use of hyperspectral imaging capabilities that go beyond the visible light spectrum. Others have begun mounting synthetic aperture radar (SAR) that can provide 3D mapping of the Earth’s surface regardless of weather or nighttime conditions.

But regardless of the technologies, interpretation of satellite imagery still depends heavily on human expertise. And experienced analysts are always careful to caution that satellite images represent just one piece of a much larger puzzle when it comes to gathering intelligence on North Korea.

“I’ve run into many cases where people who are imagery analysts believe they have absolute truth,” Bermudez Jr. says. “Satellite imagery is just a single point in time that can only tell you so much.”

Reporting for this story was supported by the Stanley Foundation.



Laser Weapons Not Yet Ready for Missile Defense

Prototype laser weapons can zap drones from the sky. But they won’t protect the U.S. from North Korean nuclear missiles

Photo: John F. Williams/U.S. Navy
Laser weapons are on a roll. The U.S. Air Force, Army, Navy, Marines, and the Joint Improvised-Threat Defeat Organization are testing them. Plans include mounting them on Humvees to shoot down drones. You can see them destroy drones on YouTube. The Missile Defense Agency wants to test laser-equipped drones as a defense against North Korean missiles.

But don’t expect a quick fix. Laser weapons have come a long way in the past decade, but they’re still years away from defending against threats ranging from North Korean long-range nuclear missiles to short-range explosive-laden drones launched by ISIS.

The Pentagon has worried about nuclear strikes by “rogue states” since the end of the Cold War. In 1996 the Air Force began work on the Airborne Laser, a plan to put a megawatt-class chemically powered laser in a Boeing 747 that could patrol near potential nuclear threats, which then included Iran and Iraq as well as North Korea. In case of a launch, the laser would fire to catch the rocket at its most vulnerable stage, as it was boosting out of the atmosphere. Two massive ground-based lasers had already demonstrated megawatt output, and the Airborne Laser used a more advanced chemical system that promised to make a better weapon system.  

The Airborne Laser finally shot down target missiles in 2010, years late and far over its original budget, but that was too little, too late to avoid cancellation. The laser hadn’t delivered enough power far enough to shoot down a missile at the desired range. Logistics experts also found that shoehorning a laser’s dangerous chemical fuels into an airplane was an insoluble problem.

When asked why he cancelled the program, Secretary of Defense Robert Gates said he knew nobody in the Pentagon “who thinks that this program should, or would, ever be operationally deployed. The reality is that you would need a laser something like 20 to 30 times more powerful than the chemical laser in the plane right now to be able to get any distance from the launch site to fire.” After a final series of tests, the Airborne Laser was scrapped in 2014.

The new generation of laser weapons is electrically powered solid-state lasers, which can run on power from diesel generators. The Missile Defense Agency is considering that technology for laser-armed drones to defend against North Korean missiles. These drones would be much smaller than a 747, carrying a payload of 5,700 kilograms at 63,000 feet, compared with a 747’s payload of over 200,000 kilograms at 40,000 feet. The beam should travel farther at the higher altitude, but the planned prototype won’t be ready until 2023.

Ground-based solid-state lasers have scored a series of successes. Last week, a 30-kilowatt Lockheed Martin ground-based system called ATHENA shot down five drones at the White Sands Missile Range. Earlier this year, Lockheed completed a 60-kilowatt version of the laser for the Army Space and Missile Defense Command in Huntsville, Ala., to test in a military truck. The Navy has tested a 30-kilowatt laser on the USS Ponce and plans ship-based tests of a 60-kilowatt laser.

But those lasers are testbeds, not weapons ready for field use. After testing several lasers with other anti-drone weapons at White Sands, the Joint Improvised-Threat Defeat Organization summed up the results as: “Bottom line: Most technologies still immature.” They had hopes for improvement, but said “threat targets were very resilient against damage.”

Shooting down enemy drones, such as those used by ISIS, with laser-equipped drones requires identifying a target drone’s most vulnerable spots, says Philip Coyle, Senior Science Fellow at The Center for Arms Control and Non-Proliferation. “Just hitting the fuselage of the drone might not do much damage. Much of the laser energy would bounce off, and even if the laser was powerful enough to burn a hole, the drone might be able to continue flying.” The engine or spots on the wing or tail might be more vulnerable. But that would vary among drones, and the Army would have to figure out what drones ISIS uses and where they are vulnerable.

“Another problem is that these laser defense systems are expensive, and we can’t afford to sprinkle them over a large area,” adds Coyle. A laser’s lethal range depends on its power and the vulnerability of potential targets, but is likely to be limited to a few miles.



FirstNet Invites U.S. States to Sign Up for Public Safety Broadband Network

AT&T won a $7 billion contract to develop a broadband network that prioritizes traffic from first responders. Now, states must decide whether to opt in, or build their own service

Photo: FirstNet

This is a guest post. The views expressed in this article are solely those of the blogger and do not represent positions of IEEE Spectrum, or the IEEE.

Three monster hurricanes pummeled the United States and Puerto Rico in recent weeks. As soon as they hit land, public safety personnel were there to help those in harm’s way. To do that important work, first responders deserve a reliable wireless network of their very own.

A little over a decade ago, the public safety community began advocating for a wireless broadband network to be used exclusively by first responders. This was in the aftermath of emergency response teams in New York City struggling to communicate with one another while responding to a major terrorist attack on September 11, 2001.

The First Responder Network Authority (“FirstNet”) grew out of a 9/11 Commission recommendation calling for interoperable communications for all U.S. first responders. In 2012, Congress passed legislation allocating 20 megahertz of spectrum in Band 14 and $7 billion to create a nationwide network just for emergency responders.

Since the authority’s inception, FirstNet staff have met with more than 100,000 public safety stakeholders nationwide to develop customized strategies to ensure the network suits their needs. Our consultations with first responders have provided us with actionable information on the challenges they face. The difficulty is summed up succinctly by Tom Sorely, deputy CIO of the City of Houston, Texas:

We compete with 70,000 football fans updating Facebook and Twitter to get our emergency messages through. It makes it very difficult to complete our mission.

We’ve gathered data from more than two million public safety personnel at 12,000 agencies nationwide. Each state and territory shared data with us from local agencies about patrol numbers, dispatch workload, special annual events that attract large crowds, seasonal operations, and any federal entities and tribal nations in the area.

We found extreme variations in public safety needs. For example, a law enforcement agency in New Jersey may support maritime operations, whereas a law enforcement agency in Wyoming needs to be able to operate deep in mountainous terrain.

FirstNet also learned that coverage and cell-site tower placement are two of the most important topics to first responders and agency heads. Dropped calls, or an inability to load Google Maps in the field, disrupts rescues and delays responders from providing services such as urgent medical care.

This year, FirstNet took several important steps toward completing America’s first nationwide wireless network dedicated to public safety. In March, we selected AT&T as the technology provider to build and operate the network. In June, we delivered customized plans to the states and territories. These plans outline the coverage, features, and mission-critical capabilities that FirstNet and AT&T will bring to each state, if they choose to sign up for it.

States and territories now have until late December to decide whether to allow AT&T to deploy the FirstNet radio access network (RAN) in their state or territory, or to build the FirstNet RAN on their own. In July, Virginia announced that it would become the first state to “opt in” and go with the FirstNet/AT&T plan to build Virginia’s portion of the RAN. Since that decision, several more states have opted in.

A governor’s decision to opt in immediately grants public safety AT&T subscribers in that state access to prioritized traffic across AT&T’s existing network, with guaranteed quality of service not just on Band 14 but on all of AT&T’s licensed LTE spectrum nationwide. AT&T will also offer pre-emption for public safety traffic by the end of 2017. These services are included at no additional cost for first responders who are AT&T subscribers in states that opt in to FirstNet.

As stated in the Act that created FirstNet, governors can choose to “opt-out” and build their own RAN within a state or territory. Given the process for doing that, these states or territories may face multi-year delays in offering their services for public safety. Regardless of the decision, both opt-in and opt-out states will connect to the dedicated FirstNet core network, which is scheduled to be operational in early 2018.

The next step for this network is to deploy a dedicated public safety core architecture. This FirstNet Core will provide specialized public safety features not currently available on commercial networks, such as local control and encryption.

The FirstNet Innovation and Test Lab in Boulder, Colo., will build, test, and support network solutions that will improve communications and functionality between first responders. Another project supported by FirstNet is developing indoor location services, a key need for firefighters that will help them more quickly navigate a burning building.

To reach remote areas or disaster zones, AT&T will also provide states with access to 72 deployable base stations (known as “cells on wheels,” or COWs), along with more than 700 other pieces of equipment including “cells on light trucks” (COLTs), trailers, and generators.

Overall, AT&T will invest $40 billion in FirstNet over the life of the contract, and the network will also leverage the company’s existing network, valued at more than $180 billion.

As the weather events of this past summer have demonstrated, the need for a resilient, interoperable network with priority and pre-emption for first responders is as pressing today as it was 16 years ago. By the next hurricane season, public safety personnel should begin to reap the benefits of a dedicated public safety broadband network, a service they’ve needed for so long.

About the Author

Jeff Bratcher is the chief technology officer for FirstNet.

Powered by WPeMatico

Review: 360fly 4K is a Simple, Rugged 360-Degree Camera for Your Next Adventure

A few interesting compromises make this unique 360-degree camera worth a look

Photo: Evan Ackerman
Most of the time, I don’t live the kind of life that lends itself to being recorded in immersive 360-degree video. I don’t fly fighter jets or do crazy adventure sports or really anything else that seems particularly worthy. But, IEEE Spectrum has been interested in experimenting with 360-degree video to see how it might help us bring you stories in a more creative way, so I brought one with me on a trip to the Galapagos Islands last month to see what I could learn.

There are a whole bunch of 360-degree cameras on the market right now, but I wanted something small and simple that produced high-quality visuals while also being rugged enough to handle sand and salt water. After doing a bunch of research, I went with the 360fly 4K, which was developed by scientists from Carnegie Mellon’s robotics lab and features a totally original design that manages 360-degree videos with just one lens.

360fly 4K

A quick word on the Galapagos, to help put these pictures and videos in context—the Galapagos are a group of volcanic islands about 1,000 kilometers off the coast of Ecuador, arguably most famous for helping inspire Darwin to come up with the concept of natural selection. The islands are so isolated that the animals living on and around them haven’t had a chance to develop a fear of humans, which in practice means that you can get extraordinarily close to a unique community of wild birds, reptiles, and sea mammals. You aren’t allowed to explore the Galapagos on your own; the only way to do it is in small groups on designated trails with experienced local guides, and all of these videos were recorded under their supervision. No animals were harmed, although some were frustrated that the 360fly was not edible.

Most 360-degree video cameras give you a spherical video: the camera uses two or more super wide lenses to see in every direction at once, and then stitches the video captured by each lens together to make a seamless video that records absolutely everything going on around you. Or, that’s the idea: in practice, this rarely works as well as it should, and you get bizarre seams in the middle of your videos where software is trying to combine recordings from separate lenses. 

360fly takes a different approach, using just one absurdly wide angle lens to record video that’s only 360 degrees in one plane. In other words, if you set the camera on the ground, it can see all the way around itself with no problems but not completely underneath. Technically, this results in a 240-degree field of view in the vertical dimension, but 360fly figures that most of the time, you won’t care about a blind spot under the camera. In exchange, you get a seamless 4K video (2880 pixels x 2880 lines) that doesn’t need to be stitched.
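To put the single-lens design in concrete terms, here is a small sketch of the geometry. It assumes an ideal equidistant fisheye projection (my simplification for illustration, not 360fly's published lens model): a pixel's distance from the image center maps linearly to its angle off the lens axis, and a 240-degree field of view leaves a blind cone directly behind the camera.

```python
import math

def sphere_coverage(fov_deg):
    """Fraction of the full sphere seen by a lens whose field of view
    spans fov_deg degrees across any plane through the lens axis."""
    # The blind region is a cone around the nadir (directly behind the
    # lens axis) with half-angle 180 - fov/2 degrees.
    blind_half = math.radians(180.0 - fov_deg / 2.0)
    blind_solid_angle = 2.0 * math.pi * (1.0 - math.cos(blind_half))
    return 1.0 - blind_solid_angle / (4.0 * math.pi)

def pixel_to_ray(u, v, size=2880, fov_deg=240.0):
    """Map image coordinates (u, v) to a unit viewing direction,
    with +z along the lens axis (equidistant fisheye assumption)."""
    cx = cy = size / 2.0
    r = math.hypot(u - cx, v - cy)                            # distance from image center
    theta = (r / (size / 2.0)) * math.radians(fov_deg / 2.0)  # angle off the lens axis
    phi = math.atan2(v - cy, u - cx)                          # azimuth around the axis
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))

# A 240-degree lens sees three quarters of the sphere; the missing
# quarter is a 60-degree-half-angle cone directly beneath the camera.
print(round(sphere_coverage(240.0), 6))  # → 0.75
```

Under this model, the blind spot works out to exactly one quarter of the sphere, which squares with the experience described below: with the camera sitting on the ground, the missing quarter is mostly ground.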

Design and Construction

The 360fly 4K is a very distinctive matte black sphere 61 millimeters in diameter, covered in funky angle-y bits, and topped by an enormous lens. There’s just one button that we’ll talk about in a bit, but overall, it’s very clean and satisfying to hold. The battery and storage are all locked up tight inside the camera, meaning that you can’t swap either of them out. In fact, the camera must be placed on a dedicated magnetic dock to recharge or to connect directly to a computer. Annoying, yes, but there’s a good reason for it: this is one impressively rugged camera.

Straight out of the box, the 360fly is waterproof to about 9 meters (30 feet), which is plenty deep enough for mildly aggressive snorkeling (although not proper scuba diving). According to the specs, it’ll handle a 1.5-meter drop, and it shrugs off dust and sand with no problems. The lens looks like it might be vulnerable, but I didn’t have any issues over several weeks of testing. The camera spent most of its time in an external mesh pocket on my backpack, without any sort of protection on the lens, and it survived being pecked by birds, bitten by baby sea lions, and sat on by a giant tortoise, as you’ll see shortly. I’d wipe the lens off if it got dirty, and that’s it. 


I love how simple the 360fly 4K is. There is one button that turns the camera on, starts a recording, stops a recording, and turns the camera off. That button has a glowy bit that surrounds it, providing all the basic information you need about what the camera is doing by glowing or blinking in different colors. Whenever you push the button, the camera gently vibrates to let you know that the push has been acknowledged, and different vibration patterns tell you without looking what the camera is doing. This is a brilliant feature that I was very grateful for, because it meant that I could confidently get the camera to do what I wanted it to do without having to focus on it—I could watch the animals instead.

With only one button, the number of things you can do with the camera is limited: by itself, it can only record video. To change camera settings (contrast, exposure, and more) or shoot in different modes, you’ll need to use the (free) app, which uses an initial Bluetooth connection to set up a dedicated Wi-Fi network to talk to the camera. This is also how you get a video preview, but personally, I almost never used the app. When I first got the camera, the preview was nice for getting a sense of what the camera could and could not see, but otherwise, I felt that relying on the app too much defeated the purpose of a camera that you don’t have to aim and that’s far more rugged than my phone.

Taking Video

One of the best things about this camera is how easy it is to shoot video. One long press of the button turns the camera on. It vibrates and the LED turns blue. One more press of the button starts recording. The camera vibrates again and the LED starts blinking red. Press the button again to stop recording, and give it a long press to turn the camera off.

The “front” of the camera, or what part of the image is at the center of the recording when you turn the camera on, is approximately where the on button is. While you can change where the center point is later in software, it would be more intuitive, I think, to have this reversed, such that when you turn the camera on (looking at the button as you do so), the opposite side is “forward.” This would also mean you could keep your eye on the color of the button while the camera is recording in case it does something wonky.

One of the advantages of 360-degree cameras is that you generally don’t have to aim them. Instead, you want to put them at the center of the action. In the Galapagos, this meant putting the camera down and standing back to let animals get close to it. Whenever it wasn’t possible to do this (like while snorkeling), I had to put a bit more care into pointing the camera, and since most of snorkeling is heavily subject-oriented, the 360-ness was less relevant.

Generally, I kept the 360fly attached to a small Gorillapod through the screw mount on the bottom of the camera. This worked great, since the Gorillapod kept the overall system compact and flexible. For snorkeling, I attached the camera to a cheap extendable selfie stick using the action camera adapter, and that worked fine as well, although if anything, I wanted even more reach to get the 360fly as close to things as it really needed to be.

My biggest worry about the 360fly as opposed to other 360-degree cameras was that blind spot on the bottom of the camera, in the form of some irrational fear of missing an amazing thing happening in a place where the camera couldn’t see it. This was almost never a problem. I mostly had the camera on the ground, and who cares about seeing the ground? The rest of the time, a superficial awareness of where the camera was looking was more than enough.

The one time I felt like I could have really used a fully spherical 360-degree camera was while snorkeling, when a sea lion began curiously swimming circles around me. I suppose that if I’d been thinking, I would have held the 360fly up near my body and pointed it downward rather than trying to track the sea lion in POV mode, and maybe that would have worked better. But the vast majority of the time, the blind spot was a non-issue.

Software and Editing

While you can access all of the content on the 360fly through the app, I find doing anything on mobile endlessly annoying, so I relied primarily on the included desktop video editor, which unfortunately is the weakest point of the 360fly system. Normally, this wouldn’t bother me, since there are all kinds of other third party video editors for you to choose from, and this is a camera review, not a software review. But because of the peculiar nature of 360-degree video, it doesn’t seem to lend itself to being messed with unless you’re willing to jump through a bunch of hoops, so the (by far) easiest and safest thing to do is to just use whatever’s included with your camera.

Important features are mostly all there—you can combine clips and crop them, and add music either on top of or to replace the existing audio (very important, because the audio that the 360fly records natively is crap). Critically, you can also set where the center of the video is, effectively changing where the camera was pointing. You can even add in scripted motions to help the viewer follow the action. My second biggest complaint is that there seem to be no options for even basic transitions, which would significantly improve the end product.

The biggest problem, though, is that the software will not allow you to combine videos with different orientations. Generally, you’re either shooting with the camera facing up, or the camera facing forward, and the software stubbornly refuses to let you mash everything together into one video. I asked 360fly about this, and they said that it would be confusing for users to watch a video that shifts perspectives like that. I see their point, but I’d much rather have the software warn me about it and allow me to do it than prevent it completely.

Once the video is edited, you can upload it directly to a YouTube account, where it arrives with whatever metadata is required for YouTube to properly present it in 360-degree format, nice and easy.

Watching Video

360-degree videos are best enjoyed with a headset of some sort. Personally, I don’t have anything fancy: my VR setup involves my (rather old now) Nexus 5X Android phone, and a $15 VR headset thing made of cardboard that I have to hold up to my face like a pair of binoculars. It’s a shame, because this in no way takes advantage of the 4K resolution of the camera, but the experience that it creates is totally decent and much better than just watching the video on the computer, even if the computer will show it in proper 4K.

Using the headset, the motion sensors in your phone will pan and tilt the video in response to the movement of your head: if you turn your head to the left, the video shifts to the left, making the experience very immersive. This is especially true if there are things moving around the camera or multiple things going on at the same time that you can follow as if you were there in real life. Watching on a static computer screen, you can still see everything by clicking and dragging the video around, but it’s much less satisfying. 

Sample Videos

If you’re watching these sample videos on a computer (as opposed to through a headset), make sure to use the mouse to pan around. You can zoom in, as well. The first video combines footage taken on several different islands, and includes a snoozing marine iguana, a baby sea lion, and the muddy armpit of a Galapagos tortoise.

If you look carefully, you can see me off in the distance starting to panic as I watch the tortoise relax on top of the camera. I had not prepared for this possibility (seriously, what are the odds), and I have very little experience convincing highly protected and very large reptiles that they’d rather be lying on top of something less expensive. Fortunately, my guide grew up in a nearby village, and had experience with situations like these, since tortoises sometimes take naps in the middle of the road. Do NOT try this yourself, but if you gently tickle the bottom of a tortoise’s feet from behind, it will get annoyed at you, move, and you can recover your camera.

Capturing myself in these videos was never my intent, but it’s turned out to be an unexpected bonus. I never thought that this would be one of the more valuable features of a 360-degree camera, but now that I’m back home, seeing myself in the video helps me remember what it was like to be there. 

While the camera did quite well in the Galapagos, I’m pretty sure that in general it’s a lousy way to record footage of wildlife. In order to get good results with a specific subject (like an animal), the subject needs to be either very large, or very close, or (ideally) both at the same time. Initially, I spent a while trying to get video of birds by holding the camera up close to them (within a few feet), but nothing turned out all that great—the field of view is simply so wide that everything looks small and far away unless it’s right up in your business. For sports-y, scenic-y stuff, this is fine, but if you’re trying to focus on a specific subject, it’s not ideal.

Is it better than a GoPro?

For most people, this is the big question: a GoPro is an affordable and predictable way to take video while you’re out adventuring. I brought along a GoPro on my Galapagos trip and used it about half the time. Is it really worth the extra expense and hassle to try to deal with 360-degree video?

Honestly, it’s a tough call. For most things, it might be hard to justify a 360-degree camera. However, there are some moments where the kind of immersion that a 360-degree camera like the 360fly 4K offers is simply magical, and you could never replace that with a more traditional camera like a GoPro. If you want to record video of a specific subject, like a person or an animal, and you can’t guarantee being able to get literally on top of or underneath that subject, a 360-degree camera may not give you the kind of footage you want. But is what you want to record happening all around you, as opposed to just in front of you? If so, a 360-degree camera could be worth it.

The 360fly tries to make this decision easy for you by also offering a “POV mode” that shoots a rather distorted 16:9 video with the camera facing forward in an effort to replicate the kind of thing that you’d get with a GoPro. Enabling this mode requires the app, though, so I didn’t end up using it all that much (favoring the easy and reliable single button on the camera). But if you’re comfortable frequently messing with your phone and don’t mind the fisheye, this could work for you. In that case, you get a 360-degree camera plus most of a GoPro, which seems ideal.

I looked at a whole bunch of 360-degree cameras before deciding to review the 360fly 4K. Two things appealed to me: the single lens meant that no stitching was necessary, and the rugged, simple operation meant I could snorkel with it and not have to worry about a case. In practice, I found the compromise of not getting a totally spherical video to be well worth it, especially when it came to the seam-free image quality.

The 360fly 4K came out last year and it lists on the 360fly website for $500, although you can find it on Amazon for $350, or just $300 at REI. It would be harder to recommend at $500, but $300-ish seems reasonable, although the price drop does make us wonder whether 360fly might be releasing something new at CES next January. If they do, we’ll be the first to let you know, but either way, I have no trouble recommending the current 360fly 4K as an excellent way to get into 360-degree video.

Disclosure: 360fly kindly lent us a camera for the purposes of this review.
