Artificial intelligence

Artificial intelligence (AI, also machine intelligence, MI) is intelligence exhibited by machines, rather than humans or other animals (natural intelligence, NI). In computer science, the field of AI research defines itself as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of success at some goal.[1] Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”.[2]

The scope of AI is disputed: as machines become increasingly capable, tasks considered as requiring “intelligence” are often removed from the definition, a phenomenon known as the AI effect, leading to the quip “AI is whatever hasn’t been done yet.”[3] For instance, optical character recognition is frequently excluded from “artificial intelligence”, having become a routine technology.[4] Capabilities generally classified as AI, as of 2017, include successfully understanding human speech,[5] competing at a high level in strategic game systems (such as chess and Go[6]), autonomous cars, intelligent routing in content delivery networks, military simulations, and interpreting complex data.

Artificial intelligence was founded as an academic discipline in 1956, and in the years since has experienced several waves of optimism,[7][8] followed by disappointment and the loss of funding (known as an “AI winter”),[9][10] followed by new approaches, success and renewed funding.[11] For most of its history, AI research has been divided into subfields that often fail to communicate with each other.[12] However, in the early 21st century statistical approaches to machine learning became successful enough to eclipse all other tools, approaches, problems and schools of thought.[11]

The traditional problems (or goals) of AI research include reasoning, knowledge, planning, learning, natural language processing, perception and the ability to move and manipulate objects.[13] General intelligence is among the field’s long-term goals.[14] Approaches include statistical methods, computational intelligence, and traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimization, neural networks and methods based on statistics, probability and economics. The AI field draws upon computer science, mathematics, psychology, linguistics, philosophy, neuroscience, artificial psychology and many others.

The field was founded on the claim that human intelligence “can be so precisely described that a machine can be made to simulate it”.[15] This raises philosophical arguments about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues which have been explored by myth, fiction and philosophy since antiquity.[16] Some people also consider AI a danger to humanity if it progresses unabatedly.[17]

In the twenty-first century, AI techniques have experienced a resurgence following concurrent advances in computer power, large amounts of data, and theoretical understanding, and AI techniques have become an essential part of the technology industry, helping to solve many challenging problems in computer science.[18]

Eleven Need-to-Know Facts About the Self-replicating Machine


In an eye-opening video about the potential of self-replicating machines, Isaac Arthur describes what they are and how they will soon have a huge effect on our future. 

From it, we gleaned 11 need-to-know facts about self-replicating machines:

1. The Concept of the Self-replicating Machine Goes Back 400 Years

It was Descartes who first described humans as machines in the 1650s. Samuel Butler later claimed that the body is a self-replicating machine, and centuries later, Eric Drexler further defined and popularized this and other nanotechnology theories in his 1986 book, Engines of Creation. In the book, he describes the universal assembler, a machine able to place atoms or molecules in specific locations and thus build any given object.

2. Technically, They Are Alive

What is life?

Life is usually defined as the ability to eat, grow, excrete, replicate, adapt and react to the environment.

At a minimum, self-replicating machines must be able to take in and use matter to create copies of themselves and form a pattern, much like our DNA. They must be able to adapt to and interact with their environments. They do not need to grow or repair themselves per se, as long as they can create copies of themselves before they deteriorate.

Most SRMs go beyond meeting the bare minimum requirements of qualifying as life.

3. Technically, We Use Them Right Now

A 3D printer that can print itself is a self-replicating machine. Though it is possible, self-replicating machines do not need to produce their own building material. The same is true of humans, who depend on other organisms and organelles, such as gut bacteria and mitochondria, to stay alive.

4. Mutation is Not Likely

If they are so life-like, doesn’t this mean they will eventually mutate?

Self-replicating machines can only mutate by design. Even if a mutation did occur, say through an adaptation to the ingredients used to self-replicate, it would be extremely improbable that enough machines would mutate in the same way to create a problem for their programmed directive.
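The improbability argument above is easy to make concrete with a toy back-of-the-envelope calculation. The numbers here are illustrative assumptions, not figures from the video: if a copying error occurs with some small probability per replication and can take many possible forms, the odds of many machines independently acquiring the *same* error collapse exponentially.

```python
# Toy estimate of a coordinated "mutation". Assumed model: a copying
# error occurs with probability p per replication, and any error takes
# one of k equally likely forms. The chance that n machines all
# independently acquire the same specific error is (p / k) ** n.

def same_mutation_probability(p: float, k: int, n: int) -> float:
    """Probability that n machines all make the same one of k possible copy errors."""
    per_machine = p / k          # chance of this particular error in one copy
    return per_machine ** n      # the machines replicate independently

# Even with a generous 1% error rate and only 1,000 possible error
# forms, ten machines sharing the same error is astronomically unlikely.
print(same_mutation_probability(p=0.01, k=1000, n=10))
```

With these assumed numbers the result is on the order of 10^-50, which is why a swarm-wide rogue mutation is treated as a non-issue absent deliberate design.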

5. A Range of Sizes

The machines can be microscopic or large. Because of this, practical use of self-replicating machines will most likely be off-planet or inside human beings, where their remarkable ability to build and repair could lead to prolonged life.

It is possible that in humans, nano-robots will be able to repair tissue or failing organs without invasive surgery. They will also be able to monitor systems from within, repairing and rebuilding themselves as time goes on.

With SRMs being used for space travel, space probes will be able to repair themselves for thousands of years during exploration, paving the way for humanity's expansion into extrasolar travel.

6. Future Possibilities for Space Travel

There are countless different kinds of interstellar self-replicators that will be possible in the future. The most basic and all-encompassing is the Von Neumann probe, an interstellar probe that self-repairs and makes copies of itself periodically while exploring space. Having the probes stop to repair only periodically reduces the total time lost to the repair phase, an important factor to consider in all interstellar space probe designs.
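The trade-off behind periodic repair stops can be sketched with a simple model. Everything here is a hypothetical illustration (the overhead and wear figures are our own assumptions, not from the video): each stop costs a fixed overhead, plus repair time proportional to the wear accumulated since the last stop, so fewer and longer stops amortize the overhead.

```python
# Hypothetical mission-time model for a self-repairing probe.
# Assumptions: each repair stop has a fixed overhead (deceleration,
# deploying repair systems), while the actual repair work scales with
# total distance traveled regardless of how it is split across stops.

def mission_time(distance_ly: float, speed_c: float, stops: int,
                 stop_overhead_yr: float = 5.0,
                 repair_rate_yr_per_ly: float = 0.1) -> float:
    """Total mission time in years: travel plus repair stops."""
    travel = distance_ly / speed_c                         # cruise time
    overhead = stops * stop_overhead_yr                    # fixed cost per stop
    repair_work = distance_ly * repair_rate_yr_per_ly      # wear-driven repairs
    return travel + overhead + repair_work

# 100 light-years at 0.1c: ten stops add 50 years of pure overhead,
# two stops add only 10, even though the repair work itself is the same.
print(mission_time(100, 0.1, stops=10))
print(mission_time(100, 0.1, stops=2))
```

Of course a real design also has to balance this against the risk of failing between stops, which is why stop frequency is an engineering decision rather than "as few as possible".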

A Bracewell probe is designed to communicate with other forms of life. In his video on self-replicating machines, Isaac Arthur gives the example of the black rectangular monolith in Stanley Kubrick’s 2001: A Space Odyssey.

The probe is designed to monitor life and then figure out how to communicate with it, which means these probes would need human-level intelligence or greater. Although Bracewell probes are not necessarily Von Neumann machines, it would make more sense for them to be, so that they can unpack and build upon arrival at a new planet.

7. Hints of Doom

There are a few doomsday theories about what could go wrong when dispatching these living robots into the solar system. But as long as we are aware of the possibilities, we should be able to avoid dire consequences.

A terraforming swarm consists of probes sent out to find and prepare life-sustaining planets. After the probes find a planet suitable for human life, they begin to terraform it, which raises moral and ethical concerns, especially if there is already life on the planet.

Berserker swarms happen when probes seek out new life and destroy it. And a gray goo swarm is the concept of self-replicating machines seeking out life and consuming it. Both of these ideas play into doomsday theories about the negative possibilities of using self-replicating machines for space exploration, but it is important to note that such SRMs would only come about through malicious intent.

10. Destroyers of the Universe?

Despite concerns about possible malevolent robots, it is impossible for robots to gray-goo a planet and destroy it all at once. Self-replicating machines cannot realistically replicate faster than organisms of the same size, and even where they can reproduce faster than biological life, they are constrained by the bottleneck effect that heat has on speed and production. Exponential growth has its limits.

Further, the more complicated the machine is, the slower its reproduction will be. SRMs can be microscopic or larger than a person, and added bulk and intelligence also add to the time it takes the machine to build a copy of itself.
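The two limits described above, a hard ceiling from heat dissipation and a slowdown from machine complexity, can be sketched together in a few lines. The doubling times and heat budget below are made-up illustrative numbers, not values from the video:

```python
# Sketch of bounded exponential growth. Assumptions: the population
# doubles once per replication cycle, a waste-heat budget caps how many
# machines can operate at once, and more complex machines have longer
# cycles.

def grow(pop: int, doubling_time_hr: float, heat_cap: int, hours: float) -> int:
    """Population after `hours`, doubling each cycle until the heat cap bites."""
    generations = int(hours // doubling_time_hr)
    for _ in range(generations):
        pop = min(pop * 2, heat_cap)   # exponential growth hits a ceiling
    return pop

# A simple machine (1 h per copy) saturates a 1,000,000-machine heat
# budget within a day; a machine whose complexity stretches the cycle
# to 6 h manages only four doublings in the same time.
print(grow(1, doubling_time_hr=1, heat_cap=1_000_000, hours=24))
print(grow(1, doubling_time_hr=6, heat_cap=1_000_000, hours=24))
```

The point of the sketch is qualitative: even unchecked doubling saturates a physical budget quickly, and complexity taxes every generation, so "eating a planet all at once" is not a realistic failure mode.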

11. Get Ready

Isaac Arthur argues that we will see self-replicating machines used for space exploration and in medicine within our lifetime. The technology could transform how we explore the universe and could be a cheaper solution and learning tool for the future. Self-replicating robots could be used in space mining, colonization, and manufacturing.

Edgy Labs Readers: What did we miss? What else should we know about SRMs?
