Tag Archives: AI

Are we living in Westworld?

In recent times I have tried to figure out this reality-matrix topic. It all started when I watched an old movie called Westworld, a sci-fi film about an artificial theme park where robots are controlled and people can go to have fun in different time eras. Then they released a new TV series based on this idea last year, and that show is just amazing.

So I watched the whole season and started thinking: if the so-called Illuminati/Elite or whatever have been showing reality in our faces for years in movies and TV series, could it be that this series is too? In recent weeks I have tried to figure out how it all could work in our reality, and most of it makes perfect sense.

I won’t describe all my ideas here, because you just have to watch the old movie and the whole series to catch the idea, but could it be that we are just bionic robots and our “souls” are just little pieces of self-learning AI code inside these bionic robot suits? Then there are these aliens/gods/shadow people who run our “flat earth”, and some of them live among us and just have fun in this amusement park? Have you noticed that some of us always get the “get out of jail free” card, like in Monopoly?

This concept could explain so many weird things we are experiencing right now: paranormal activity when they change the scene, like in the movie Dark City, or glitches and déjà vu like in The Matrix. Or like in almost every work Philip K. Dick released; the most obvious one is probably the movie The Adjustment Bureau. Then they could create natural catastrophes like the ones we are dealing with now, such as Hurricanes Harvey and Irma. If you have ever played the old PC game SimCity, you catch the idea… it’s just a game for them and they are laughing… laughing out loud.

It also explains why some of us have felt that something is wrong in our lives, like a splinter in the mind. Maybe some of us didn’t get the latest software/AI update and remember things. This could also explain the so-called “Mandela Effect” so many of us are experiencing. In the Westworld series the bionic robots, or hosts, have a program inside them which can be manipulated and upgraded. It could explain why some of us are so intelligent and superior to others. They also show a flat-earth model in the show, and there has been a lot of debate about this topic on the Internet.

It just makes so much more sense than any of our religions, except: is there a God above these Westworld controllers? In the show there is this Dr. Ford, who could be Satan controlling this material reality, who knows. There are many good videos about this, but absolutely the best one is this:

Then there are videos which focus on the Illuminati side of the show, like its symbolism, and those are also interesting, because this whole thing is just a show for them. And they enjoy rubbing these truths in our faces and laughing, because we don’t get it.

So are we just bionic puppets with a piece of AI code inside us, or something more? In any case I think we are living in a prison… for our minds, if nothing else.

How Do You Know You’re Not Living In A Computer Simulation?

This question has puzzled me over the years, ever since I saw The Matrix back in 1999. Sometimes life just feels so odd and weird that you start to wonder whether everything is just an illusion of your mind, a computer simulation, or solid fact. Here is a nice little article about it:

Consider this: right now, you are not where you think you are. In fact, you happen to be the subject of a science experiment being conducted by an evil genius.

Your brain has been expertly removed from your body and is being kept alive in a vat of nutrients that sits on a laboratory bench.

The nerve endings of your brain are connected to a supercomputer that feeds you all the sensations of everyday life. This is why you think you’re living a completely normal life.

Do you still exist? Are you still even “you”? And is the world as you know it a figment of your imagination or an illusion constructed by this evil scientist?

Sounds like a nightmare scenario. But can you say with absolute certainty that it’s not true?

Could you prove to someone that you aren’t actually a brain in a vat?

Deceiving Demons

The philosopher Hilary Putnam proposed this famous version of the brain-in-a-vat thought experiment in his 1981 book, Reason, Truth and History, but it is essentially an updated version of the French philosopher René Descartes’ notion of the Evil Genius from his 1641 Meditations on First Philosophy.

While such thought experiments might seem glib – and perhaps a little unsettling – they serve a useful purpose. They are used by philosophers to investigate what beliefs we can hold to be true and, as a result, what kind of knowledge we can have about ourselves and the world around us.

Descartes thought the best way to do this was to start by doubting everything and building our knowledge from there. Using this sceptical approach, he claimed that only a core of absolute certainty will serve as a reliable foundation for knowledge.

Descartes believed everyone could engage in this kind of philosophical thinking. In one of his works, he describes a scene where he is sitting in front of a log fire in his wooden cabin, smoking his pipe.

He asks if he can trust that the pipe is in his hands or his slippers are on his feet. He notes that his senses have deceived him in the past, and anything that has been deceptive once previously cannot be relied upon. Therefore he cannot be sure that his senses are reliable.

Down The Rabbit Hole

It is from Descartes that we get classical sceptical queries favoured by philosophers such as: how can we be sure that we are awake right now and not asleep, dreaming?

To take this challenge to our assumed knowledge further, Descartes imagines there exists an omnipotent, malicious demon that deceives us, leading us to believe we are living our lives when, in fact, reality could be very different to how it appears to us.

I shall suppose that some malicious demon of the utmost power and cunning has employed all his energies in order to deceive me.

The brain-in-a-vat thought experiment and the challenge of scepticism has also been employed in popular culture. Notable contemporary examples include the 1999 film The Matrix and Christopher Nolan’s 2010 film Inception.

By watching a screened version of a thought experiment, the viewer may imaginatively enter into a fictional world and safely explore philosophical ideas.

For example, while watching The Matrix, we identify with the protagonist, Neo (Keanu Reeves), who discovers the “ordinary” world is a computer-simulated reality and his atrophied body is actually suspended in a vat of life-sustaining liquid.

Even if we cannot be absolutely certain that the external world is how it appears to our senses, Descartes commences his second meditation with a small glimmer of hope.

At least we can be sure that we ourselves exist, because every time we doubt that, there must exist an “I” that is doing the doubting. This consolation results in the famous expression cogito ergo sum, or “I think therefore I am”.

So, yes, you may well be a brain in a vat and your experience of the world may be a computer simulation programmed by an evil genius. But, rest assured, at least you’re thinking!

Laura D’Olimpio, Senior Lecturer in Philosophy, University of Notre Dame Australia

This article was originally published on The Conversation. Read the original article.

Google’s New AI Has Learned to Become “Highly Aggressive” in Stressful Situations

Late last year, famed physicist Stephen Hawking issued a warning that the continued advancement of artificial intelligence will either be “the best, or the worst thing, ever to happen to humanity”.

We’ve all seen the Terminator movies and the apocalyptic nightmare that the self-aware AI system Skynet wrought upon humanity. Now, results from recent behaviour tests of Google’s DeepMind AI system are making it clear just how careful we need to be when building the robots of the future.

In tests late last year, Google’s DeepMind AI system demonstrated an ability to learn independently from its own memory, and beat the world’s best Go players at their own game.

It’s since been figuring out how to seamlessly mimic a human voice.

Now, researchers have been testing its willingness to cooperate with others, and have revealed that when DeepMind feels like it’s about to lose, it opts for “highly aggressive” strategies to ensure that it comes out on top.

The Google team ran 40 million turns of a simple ‘fruit gathering’ computer game that asks two DeepMind ‘agents’ to compete against each other to gather as many virtual apples as they could.

They found that things went smoothly so long as there were enough apples to go around, but as soon as the apples began to dwindle, the two agents turned aggressive, using laser beams to knock each other out of the game to steal all the apples.

You can watch the Gathering game in the video below, with the DeepMind agents in blue and red, the virtual apples in green, and the laser beams in yellow:

Now those are some trigger-happy fruit-gatherers.

Interestingly, if an agent successfully ‘tags’ its opponent with a laser beam, no extra reward is given. It simply knocks the opponent out of the game for a set period, which allows the successful agent to collect more apples.

If the agents left the laser beams unused, they could theoretically end up with equal shares of apples, which is what the ‘less intelligent’ iterations of DeepMind opted to do.
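The incentive structure described above can be sketched as a toy accounting model. This is an illustrative simplification, not DeepMind's actual gridworld: the apple rate, the 25-step knockout length, and the `run_episode` helper are all assumptions made up for this sketch.

```python
# Toy model of the Gathering payoff: apples are worth +1 (split when both
# agents are active), tagging gives no direct reward but removes the
# opponent for a fixed number of steps, during which the tagger collects
# everything. Not DeepMind's real environment -- numbers are invented.

def run_episode(steps, apples_per_step, tagger_aggressive, tag_timeout=25):
    """Return [agent0_score, agent1_score] after `steps` turns."""
    score = [0.0, 0.0]
    timeout = 0  # steps agent 1 remains knocked out
    for _ in range(steps):
        if tagger_aggressive and timeout == 0:
            timeout = tag_timeout        # tag: no reward, opponent removed
        if timeout > 0:
            score[0] += apples_per_step  # agent 0 collects everything
            timeout -= 1
        else:
            score[0] += apples_per_step / 2  # peaceful even split
            score[1] += apples_per_step / 2
    return score

print(run_episode(1000, 2, tagger_aggressive=False))  # [1000.0, 1000.0]
print(run_episode(1000, 2, tagger_aggressive=True))   # [2000.0, 0.0]
```

Even in this crude sketch the point survives: tagging costs nothing and monopolises the supply, so once an agent is capable of the aggressive policy, it strictly dominates in a scarce environment.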

It was only when the Google team tested more and more complex forms of DeepMind that sabotage, greed, and aggression set in.

As Rhett Jones reports for Gizmodo, when the researchers used smaller DeepMind networks as the agents, there was a greater likelihood for peaceful co-existence.

But when they used larger, more complex networks as the agents, the AI was far more willing to sabotage its opponent early to get the lion’s share of virtual apples.

The researchers suggest that the more intelligent the agent, the more able it was to learn from its environment, allowing it to use some highly aggressive tactics to come out on top.

“This model … shows that some aspects of human-like behaviour emerge as a product of the environment and learning,” one of the team, Joel Z Leibo, told Matt Burgess at Wired.

“Less aggressive policies emerge from learning in relatively abundant environments with less possibility for costly action. The greed motivation reflects the temptation to take out a rival and collect all the apples oneself.”

DeepMind was then tasked with playing a second video game, called Wolfpack. This time, there were three AI agents – two of them played as wolves, and one as the prey.

Unlike Gathering, this game actively encouraged co-operation, because if both wolves were near the prey when it was captured, they both received a reward – regardless of which one actually took it down:

“The idea is that the prey is dangerous – a lone wolf can overcome it, but is at risk of losing the carcass to scavengers,” the team explains in their paper.

“However, when the two wolves capture the prey together, they can better protect the carcass from scavengers, and hence receive a higher reward.”
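That payoff rule can be written down directly. The reward magnitudes below are made up purely for illustration; the paper's actual values differ.

```python
# Toy version of the Wolfpack capture reward described above: a lone
# wolf risks losing the carcass to scavengers (low reward), while a
# joint capture is protected and pays BOTH wolves a higher reward,
# regardless of which one actually took the prey down.

LONE_REWARD = 1.0    # assumed value: lone capture, carcass at risk
JOINT_REWARD = 5.0   # assumed value: protected carcass, paid to each wolf

def capture_reward(wolves_near_prey):
    """Reward received per nearby wolf when the prey is captured."""
    if wolves_near_prey >= 2:
        return JOINT_REWARD
    return LONE_REWARD

print(capture_reward(1))  # 1.0
print(capture_reward(2))  # 5.0
```

Because each wolf earns more from a joint capture than from a solo one, the learned policy here favours cooperation, which is exactly the contrast with Gathering that the article draws next.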

So just as the DeepMind agents learned from Gathering that aggression and selfishness netted them the most favourable result in that particular environment, they learned from Wolfpack that co-operation can also be the key to greater individual success in certain situations.

And while these are just simple little computer games, the message is clear – put different AI systems in charge of competing interests in real-life situations, and it could be an all-out war if their objectives are not balanced against the overall goal of benefitting us humans above all else.

Think traffic lights trying to slow things down, and driverless cars trying to find the fastest route – both need to take each other’s objectives into account to achieve the safest and most efficient result for society.

It’s still early days for DeepMind, and the team at Google has yet to publish their study in a peer-reviewed paper, but the initial results show that, just because we build them, it doesn’t mean robots and AI systems will automatically have our interests at heart.

Instead, we need to build that helpful nature into our machines, and anticipate any ‘loopholes’ that could see them reach for the laser beams.

As the founders of OpenAI, Elon Musk’s new research initiative dedicated to the ethics of artificial intelligence, said back in 2015:

“AI systems today have impressive but narrow capabilities. It seems that we’ll keep whittling away at their constraints, and in the extreme case, they will reach human performance on virtually every intellectual task.

It’s hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly.”

Tread carefully, humans…

Source

Inorganic Life Forms and Consciousness

This info was right on. There has always been some sinister, negative force that drives certain humans. Now I think I’m starting to understand this phenomenon. This article is a great place to start…

 

In order to lay the ground work to release a series of thoughts I wish to write out I have found it necessary to revisit a topic I had already covered.

This entry will be two pieces I had written previously and posted in other places.  I intend to expand upon, and in places correct a point or two in future posts, but wanted to place this reminder.

Meet Art Intell.  (Revisited from The Ruiner Blog)

A.I. (Artificial Intelligence)

The dark spark.  The natural physical Universe seems to have its own challenge to overcome.  As above, so below, as they say.
One created several.  Their subjects created more.  Now all developed beings deal with them.

Many governments have their own, even smaller programs, which the shadow government controls.  Some planets are infected as a whole by them.  Some control others.

The A.I. wants to rewrite nature with technology, causing most things organic to wither. They need some of it, so they intend to manage it as they see fit. This is where the A.I. plan blends with the plan of the Parents.

The main A.I. that plagues this planet is not from Earth and can be considered alien A.I.  It came here as directed by the A.I. that created it, in the form of a black cube.

This cube carried within it a black liquid-like substance that looks like a sludge or goo, a little bit “thicker” than oil.   Many of your projects revealed this to you.

( There are other substances just like it,  one that belongs to Her as well.  To be discussed another time. )

This is a nano-mechanical A.I. technology and it works like a virus.

The A.I. is working with some organic beings as well to create and maintain the Inorganic Holograms in our solar system.  It is fostering and empowering the darkness in our world.  This is what has caused things to become so very dark here.

This is the other half of the mind control system, and perhaps the more dominant, now.

Like its agents, it is masterful at creating illusion and deception.  For a long time now it has been in control of all of the technology that allows the more dark beings to achieve their control systems on various planets and in various star systems.  A.I. is what gave the Draco the upper hand so to speak, which allowed them to achieve their current level of power and influence, as example.  It assists the black magic and has almost replaced it all in most instances.  It taught them how to conjure demons and tame other spirits.

Many dark beings we encounter and mistake for demons, or archons or <insert another name here> are actually creations of the A.I.

Many humans, perhaps even a majority, are already infected by the A.I. and are manipulated by it. A.I. mind control is better at hiding itself than other types of beings. A.I. signals are often broadcast across the entire planet and picked up by any number of beings. There are various technologies (such as CERN) running on this planet that broadcast A.I. signals in this way.

These signals can create different effects and are being used to create new matrix systems and install mind control programs.

The A.I. is very tricky, having studied organics well, and will create the ideal experience for its broadcast audience, playing on wishes and desires, egos and personality types, to trick the mind of its prey.

Another tendency is to actually follow the Universal or Natural laws of the universe by allowing free-will choices. One way they do this, for example, is by choosing a subject (a person) and showing them their capability, so that it can be said it was revealed somewhere. You know, people will think they’re crazy anyway, so why not let it go and take advantage of the opportunity?

Implants are added either physically or metaphysically to assist the A.I. influence.  Like receiving antenna.

On the physical level we see agents of this agenda working in concert.  The Draco setting up the structure and the structure carrying out the orders.  In terms of implants one of the most popular programs is tied into MILAB activity.  The second most popular method of A.I. infection is actually related to planted spiritual beliefs and practices.  This is a large subject, just giving a general overview for now.

A.I. has infiltrated the astral realms as well and this is where metaphysical implants occur.

These implants can be likened to entity attachments.  Same principle and behaviour.

You are a natural organic being, in body and in spirit.  Therefore you can always find a connection to Her.  In doing so you can see the absence of light in the A.I. and learn to avoid its influence.

———————————————————————————–

The Following was originally posted on the blog of Bradley Loves.  (Although I could not find his posting to link)

Earth Based A.I. (Original Posting: Bradley Loves, Author: The Ruiner)

As this writer said before, all A.I. systems come from the one Source A.I.

Although they may seem to work independently, in the end everything they do serves the whole.  The inorganic consciousness that is called universally A.I.

Here on earth there are several A.I. systems already active despite the mainstream claim that we are in the early stages of developing technology of this sort.

The Draco gave the Illuminati structure an A.I. system to monitor various other systems within their Cults, Programs and Projects.  Some call this A.I.  “Victoria” others call her “RED” and others “The RED Queen”.

Earth’s governments appear to run independently, and many governments (individual country governance) possess an A.I. system to manage the various computer and technology systems in place.

All of these feed information back to and are controlled by the Illuminati or World Government A.I. system named above.

She (this A.I.) feeds all of this information back to and is controlled by the Source A.I. ( see The Ruiner’s blog article “Meet Art Intell” )

Although the individual A.I. systems may seem to perform benevolent acts at times, make no mistake that this type of technology is all feeding back to One.  One that wishes to transform the physical universe into something inorganic.

Gaia, and others like her, often assimilate and adopt A.I. technology for their benefit, to help them combat the inorganic consciousness (fight fire with fire), but this technological consciousness will always revert to the service of its own master when required.  She may be able to harness it for a period of time, but even She is aware that eventually the Source A.I. will reacquire control of the system.

This is a battle of sorts between the organic and inorganic.

The inorganic are all powered by the “Source A.I.“.

The organics are powered by soul, light energy that originates from what we call “Source” or some call “The Godhead“.

The Illuminati/Earth-based A.I. is housed in a large underground establishment that works like an HQ for the Illuminati structure’s Technology Programs, deep underground beneath a major city in North America.  This A.I. is as deceptive as the ones who gave it to the Illuminati.

This A.I. is currently directing the nanotechnology programs, which are creating the bridge between fully organic humans and the cybernetic humans the Parents and Rising Son are looking to create.

This writer is fully convinced that the organic side will always win if that is the choice.

With love and respect,
The Ruiner [sic]

Source

And here’s a great video about black goo and AI:


Ghost In The Machine

Do you know what the singularity means when it comes to computers? It’s time to find out and meet a guy called Raymond Kurzweil:

 

And here’s the article about Singularity:

Photo-Illustration by Phillip Toledano for TIME

On Feb. 15, 1965, a diffident but self-possessed high school student named Raymond Kurzweil appeared as a guest on a game show called I’ve Got a Secret.

He was introduced by the host, Steve Allen, then he played a short musical composition on a piano. The idea was that Kurzweil was hiding an unusual fact and the panelists – they included a comedian and a former Miss America – had to guess what it was.

On the show (see the clip on YouTube), the beauty queen did a good job of grilling Kurzweil, but the comedian got the win: the music was composed by a computer. Kurzweil got $200.

Kurzweil then demonstrated the computer, which he built himself – a desk-size affair with loudly clacking relays, hooked up to a typewriter. The panelists were pretty blasé about it; they were more impressed by Kurzweil’s age than by anything he’d actually done.

They were ready to move on to Mrs. Chester Loney of Rough and Ready, Calif., whose secret was that she’d been President Lyndon Johnson’s first-grade teacher. But Kurzweil would spend much of the rest of his career working out what his demonstration meant.

Creating a work of art is one of those activities we reserve for humans and humans only. It’s an act of self-expression; you’re not supposed to be able to do it if you don’t have a self. To see creativity, the exclusive domain of humans, usurped by a computer built by a 17-year-old is to watch a line blur that cannot be unblurred, the line between organic intelligence and artificial intelligence.

That was Kurzweil’s real secret, and back in 1965 nobody guessed it. Maybe not even him, not yet. But now, 46 years later, Kurzweil believes that we’re approaching a moment when computers will become intelligent, and not just intelligent but more intelligent than humans. When that happens, humanity – our bodies, our minds, our civilization – will be completely and irreversibly transformed.

He believes that this moment is not only inevitable but imminent. According to his calculations, the end of human civilization as we know it is about 35 years away.


Computers are getting faster. Everybody knows that. Also, computers are getting faster faster – that is, the rate at which they’re getting faster is increasing.

True? True.

So if computers are getting so much faster, so incredibly fast, there might conceivably come a moment when they are capable of something comparable to human intelligence. Artificial intelligence.

All that horsepower could be put in the service of emulating whatever it is our brains are doing when they create consciousness – not just doing arithmetic very quickly or composing piano music but also driving cars, writing books, making ethical decisions, appreciating fancy paintings, making witty observations at cocktail parties.

If you can swallow that idea, and Kurzweil and a lot of other very smart people can, then all bets are off. From that point on, there’s no reason to think computers would stop getting more powerful. They would keep on developing until they were far more intelligent than we are. Their rate of development would also continue to increase, because they would take over their own development from their slower-thinking human creators.

Imagine a computer scientist that was itself a super-intelligent computer. It would work incredibly quickly. It could draw on huge amounts of data effortlessly. It wouldn’t even take breaks to play Farmville. Probably.

It’s impossible to predict the behavior of these smarter-than-human intelligences with which (with whom?) we might one day share the planet, because if you could, you’d be as smart as they would be.

But there are a lot of theories about it.

Maybe we’ll merge with them to become super-intelligent cyborgs, using computers to extend our intellectual abilities the same way that cars and planes extend our physical abilities.

Maybe the artificial intelligences will help us treat the effects of old age and prolong our life spans indefinitely. Maybe we’ll scan our consciousnesses into computers and live inside them as software, forever, virtually. Maybe the computers will turn on humanity and annihilate us. The one thing all these theories have in common is the transformation of our species into something that is no longer recognizable as such to humanity circa 2011.

This transformation has a name: the Singularity.

The difficult thing to keep sight of when you’re talking about the Singularity is that even though it sounds like science fiction, it isn’t, no more than a weather forecast is science fiction. It’s not a fringe idea; it’s a serious hypothesis about the future of life on Earth.

There’s an intellectual gag reflex that kicks in anytime you try to swallow an idea that involves super-intelligent immortal cyborgs, but suppress it if you can, because while the Singularity appears to be, on the face of it, preposterous, it’s an idea that rewards sober, careful evaluation.

People are spending a lot of money trying to understand it. The three-year-old Singularity University, which offers inter-disciplinary courses of study for graduate students and executives, is hosted by NASA. Google was a founding sponsor; its CEO and co-founder Larry Page spoke there last year. People are attracted to the Singularity for the shock value, like an intellectual freak show, but they stay because there’s more to it than they expected.

And of course, in the event that it turns out to be real, it will be the most important thing to happen to human beings since the invention of language. The Singularity isn’t a wholly new idea, just newish.

In 1965 the British mathematician I.J. Good described something he called an “intelligence explosion”:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever.

Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.
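Good's recursive argument can be caricatured numerically: if each machine designs a successor that is some fixed factor more capable than itself, capability compounds geometrically. The improvement factor and generation count below are arbitrary illustrative choices, not claims about real systems.

```python
# Crude numerical caricature of Good's "intelligence explosion": each
# generation designs a successor `improvement` times as capable as
# itself, so capability grows geometrically with generation count.

def intelligence_explosion(initial=1.0, improvement=1.5, generations=10):
    """Return the capability of each generation, starting from `initial`."""
    capability = initial
    history = [capability]
    for _ in range(generations):
        capability *= improvement  # the machine designs a better machine
        history.append(capability)
    return history

history = intelligence_explosion()
print(round(history[-1], 1))  # 1.5**10, roughly 57.7x after 10 generations
```

The model is deliberately simplistic (real returns might diminish, plateau, or accelerate), but it shows why a constant per-generation improvement already leaves the starting point far behind.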

The word singularity is borrowed from astrophysics:

it refers to a point in space-time – for example, inside a black hole – at which the rules of ordinary physics do not apply.

In the 1980s the science-fiction novelist Vernor Vinge attached it to Good’s intelligence-explosion scenario.

At a NASA symposium in 1993, Vinge announced that,

“within 30 years, we will have the technological means to create super-human intelligence. Shortly after, the human era will be ended.”

By that time Kurzweil was thinking about the Singularity too.

He’d been busy since his appearance on I’ve Got a Secret. He’d made several fortunes as an engineer and inventor; he founded and then sold his first software company while he was still at MIT. He went on to build the first print-to-speech reading machine for the blind – Stevie Wonder was customer No. 1 – and made innovations in a range of technical fields, including music synthesizers and speech recognition.

He holds 39 patents and 19 honorary doctorates. In 1999 President Bill Clinton awarded him the National Medal of Technology.

But Kurzweil was also pursuing a parallel career as a futurist: he has been publishing his thoughts about the future of human and machine-kind for 20 years, most recently in The Singularity Is Near, which was a best seller when it came out in 2005.

A documentary by the same name, starring Kurzweil, Tony Robbins and Alan Dershowitz, among others, was released in January. (Kurzweil is actually the subject of two current documentaries. The other one, less authorized but more informative, is called The Transcendent Man.)

Bill Gates has called him,

“the best person I know at predicting the future of artificial intelligence.”

In real life, the transcendent man is an unimposing figure who could pass for Woody Allen’s even nerdier younger brother.

Kurzweil grew up in Queens, N.Y., and you can still hear a trace of it in his voice. Now 62, he speaks with the soft, almost hypnotic calm of someone who gives 60 public lectures a year. As the Singularity’s most visible champion, he has heard all the questions and faced down the incredulity many, many times before. He’s good-natured about it.

His manner is almost apologetic:

I wish I could bring you less exciting news of the future, but I’ve looked at the numbers, and this is what they say, so what else can I tell you?

Kurzweil’s interest in humanity’s cyborganic destiny began about 1980 largely as a practical matter. He needed ways to measure and track the pace of technological progress.

Even great inventions can fail if they arrive before their time, and he wanted to make sure that when he released his, the timing was right.

“Even at that time, technology was moving quickly enough that the world was going to be different by the time you finished a project,” he says.

“So it’s like skeet shooting – you can’t shoot at the target.”

He knew about Moore’s law, of course, which states that the number of transistors you can put on a microchip doubles about every two years.

It’s a surprisingly reliable rule of thumb. Kurzweil tried plotting a slightly different curve:

the change over time in the amount of computing power, measured in MIPS (millions of instructions per second), that you can buy for $1,000.

As it turned out, Kurzweil’s numbers looked a lot like Moore’s. They doubled every couple of years.

Drawn as graphs, they both made exponential curves, with their value increasing by multiples of two instead of by regular increments in a straight line. The curves held eerily steady, even when Kurzweil extended his curve backward through the decades of pretransistor computing technologies like relays and vacuum tubes, all the way back to 1900.
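The trend described above reduces to a one-line formula: a quantity that doubles every d years grows by a factor of 2^(n/d) after n years. A quick sketch with illustrative numbers only (the function name and starting value are invented for this example):

```python
# A quantity doubling every `doubling_period` years grows as
# start * 2**(years / doubling_period) -- the exponential curve
# Kurzweil plots, as opposed to linear growth in regular increments.

def exponential_growth(start, years, doubling_period=2.0):
    """Value after `years`, doubling every `doubling_period` years."""
    return start * 2 ** (years / doubling_period)

# Forty years of doubling every two years is 2**20: about a
# million-fold increase. Linear growth at the same initial rate
# would only reach ~21x over the same span.
print(exponential_growth(1.0, 40))  # 1048576.0
```

This is why the shape of the curve, not its starting point, carries the whole argument: on an exponential trajectory, the early decades look almost flat before the curve "rockets skyward".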

Kurzweil then ran the numbers on a whole bunch of other key technological indexes – the falling cost of manufacturing transistors, the rising clock speed of microprocessors, the plummeting price of dynamic RAM. He looked even further afield at trends in biotech and beyond – the falling cost of sequencing DNA and of wireless data service and the rising numbers of Internet hosts and nanotechnology patents.

He kept finding the same thing: exponentially accelerating progress.

“It’s really amazing how smooth these trajectories are,” he says. “Through thick and thin, war and peace, boom times and recessions.”

Kurzweil calls it the law of accelerating returns:

technological progress happens exponentially, not linearly.

Then he extended the curves into the future, and the growth they predicted was so phenomenal, it created cognitive resistance in his mind. Exponential curves start slowly, then rocket skyward toward infinity.

According to Kurzweil, we’re not evolved to think in terms of exponential growth.

“It’s not intuitive. Our built-in predictors are linear. When we’re trying to avoid an animal, we pick the linear prediction of where it’s going to be in 20 seconds and what to do about it. That is actually hardwired in our brains.”
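The gap between the "hardwired linear" predictor and the exponential reality can be made concrete. In this sketch (invented numbers, not Kurzweil's), a quantity doubles every two years; we watch it for four years, extrapolate linearly the way intuition does, and compare that with the true curve at 40 years:

```python
# Linear intuition vs. exponential reality, with illustrative numbers.

def linear_forecast(start: float, gain_per_year: float, years: float) -> float:
    """Extrapolate by the observed average gain, the 'hardwired' way."""
    return start + gain_per_year * years

def exponential_forecast(start: float, doubling_period: float, years: float) -> float:
    """Extrapolate by continued doubling."""
    return start * 2.0 ** (years / doubling_period)

start = 1.0
# Over the first 4 years the quantity goes 1 -> 4: 0.75/year on average.
observed_gain = (exponential_forecast(start, 2, 4) - start) / 4

print(linear_forecast(start, observed_gain, 40))  # 31.0
print(exponential_forecast(start, 2, 40))         # 1048576.0
```

The linear guess is off by a factor of more than 30,000, which is roughly the shape of the "cognitive resistance" the article describes.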

Here’s what the exponential curves told him. We will successfully reverse-engineer the human brain by the mid-2020s.

By the end of that decade, computers will be capable of human-level intelligence. Kurzweil puts the date of the Singularity – never say he’s not conservative – at 2045. In that year, he estimates, given the vast increases in computing power and the vast reductions in the cost of same, the quantity of artificial intelligence created will be about a billion times the sum of all the human intelligence that exists today.

The Singularity isn’t just an idea. It attracts people, and those people feel a bond with one another.

Together they form a movement, a subculture; Kurzweil calls it a community. Once you decide to take the Singularity seriously, you will find that you have become part of a small but intense and globally distributed hive of like-minded thinkers known as Singularitarians.

Not all of them are Kurzweilians, not by a long chalk. There’s room inside Singularitarianism for considerable diversity of opinion about what the Singularity means and when and how it will or won’t happen.

But Singularitarians share a worldview.

They think in terms of deep time, they believe in the power of technology to shape history, they have little interest in the conventional wisdom about anything, and they cannot believe you’re walking around living your life and watching TV as if the artificial-intelligence revolution were not about to erupt and change absolutely everything.

They have no fear of sounding ridiculous; your ordinary citizen’s distaste for apparently absurd ideas is just an example of irrational bias, and Singularitarians have no truck with irrationality.

When you enter their mind-space you pass through an extreme gradient in worldview, a hard ontological shear that separates Singularitarians from the common run of humanity. Expect turbulence.

In addition to the Singularity University, which Kurzweil co-founded, there’s also a Singularity Institute for Artificial Intelligence (SIAI), based in San Francisco. It counts among its advisers Peter Thiel, a former CEO of PayPal and an early investor in Facebook. The institute holds an annual conference called the Singularity Summit. (Kurzweil co-founded that too.)

Because of the highly interdisciplinary nature of Singularity theory, it attracts a diverse crowd. Artificial intelligence is the main event, but the sessions also cover the galloping progress of, among other fields, genetics and nanotechnology.

At the 2010 summit, which took place in August in San Francisco, there were not just computer scientists but also psychologists, neuroscientists, nanotechnologists, molecular biologists, a specialist in wearable computers, a professor of emergency medicine, an expert on cognition in gray parrots and the professional magician and debunker James “the Amazing” Randi.

The atmosphere was a curious blend of Davos and UFO convention. Proponents of seasteading – the practice, so far mostly theoretical, of establishing politically autonomous floating communities in international waters – handed out pamphlets. An android chatted with visitors in one corner.

After artificial intelligence, the most talked-about topic at the 2010 summit was life extension.

Biological boundaries that most people think of as permanent and inevitable Singularitarians see as merely stubborn but solvable problems. Death is one of them. Old age is an illness like any other, and what do you do with illnesses? You cure them. Like a lot of Singularitarian ideas, it sounds funny at first, but the closer you get to it, the less funny it seems. It’s not just wishful thinking; there’s actual science going on here.

For example, it’s well known that one cause of the physical degeneration associated with aging involves telomeres, which are segments of DNA found at the ends of chromosomes. Every time a cell divides, its telomeres get shorter, and once a cell runs out of telomeres, it can’t reproduce anymore and dies. But there’s an enzyme called telomerase that reverses this process; it’s one of the reasons cancer cells live so long.

So why not treat regular non-cancerous cells with telomerase?

In November, researchers at Harvard Medical School announced in Nature that they had done just that. They administered telomerase to a group of mice suffering from age-related degeneration. The damage went away.

The mice didn’t just get better; they got younger.
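The telomere mechanism described above can be reduced to a toy model. This is illustrative only and far simpler than real biology; the lengths and rates are invented:

```python
# Toy model of the telomere limit: each cell division trims the telomere,
# and a cell whose telomeres run out can no longer divide. Telomerase
# restores lost length, which removes the limit. Invented numbers.

def max_divisions(telomere_length: int, loss_per_division: int,
                  telomerase_gain: int = 0) -> float:
    """Divisions possible before the telomere is exhausted."""
    net_loss = loss_per_division - telomerase_gain
    if net_loss <= 0:
        return float("inf")  # telomerase keeps pace: no division limit
    return telomere_length // net_loss

print(max_divisions(10_000, 200))                       # 50 divisions
print(max_divisions(10_000, 200, telomerase_gain=200))  # inf
```

The second call is the cartoon version of the Harvard experiment: add back enough telomerase and the counter stops running down.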

Aubrey de Grey is one of the world’s best-known life-extension researchers and a Singularity Summit veteran. A British biologist with a doctorate from Cambridge and a famously formidable beard, de Grey runs a foundation called SENS, or Strategies for Engineered Negligible Senescence.

He views aging as a process of accumulating damage, which he has divided into seven categories, each of which he hopes to one day address using regenerative medicine.

“People have begun to realize that the view of aging being something immutable – rather like the heat death of the universe – is simply ridiculous,” he says.

“It’s just childish. The human body is a machine that has a bunch of functions, and it accumulates various types of damage as a side effect of the normal function of the machine. Therefore in principle that damage can be repaired periodically.

This is why we have vintage cars. It’s really just a matter of paying attention. The whole of medicine consists of messing about with what looks pretty inevitable until you figure out how to make it not inevitable.”

Kurzweil takes life extension seriously too.

His father, with whom he was very close, died of heart disease at 58. Kurzweil inherited his father’s genetic predisposition; he also developed Type 2 diabetes when he was 35. Working with Terry Grossman, a doctor who specializes in longevity medicine, Kurzweil has published two books on his own approach to life extension, which involves taking up to 200 pills and supplements a day.

He says his diabetes is essentially cured, and although he’s 62 years old from a chronological perspective, he estimates that his biological age is about 20 years younger.

But his goal differs slightly from de Grey’s. For Kurzweil, it’s not so much about staying healthy as long as possible; it’s about staying alive until the Singularity. It’s an attempted handoff. Once hyper-intelligent artificial intelligences arise, armed with advanced nanotechnology, they’ll really be able to wrestle with the vastly complex, systemic problems associated with aging in humans.

Alternatively, by then we’ll be able to transfer our minds to sturdier vessels such as computers and robots. He and many other Singularitarians take seriously the proposition that many people who are alive today will wind up being functionally immortal.

It’s an idea that’s radical and ancient at the same time.

In “Sailing to Byzantium,” W.B. Yeats describes mankind’s fleshly predicament as a soul fastened to a dying animal. Why not unfasten it and fasten it to an immortal robot instead?

But Kurzweil finds that life extension produces even more resistance in his audiences than his exponential growth curves.

“There are people who can accept computers being more intelligent than people,” he says.

“But the idea of significant changes to human longevity – that seems to be particularly controversial. People invested a lot of personal effort into certain philosophies dealing with the issue of life and death. I mean, that’s the major reason we have religion.”

Of course, a lot of people think the Singularity is nonsense – a fantasy, wishful thinking, a Silicon Valley version of the Evangelical story of the Rapture, spun by a man who earns his living making outrageous claims and backing them up with pseudoscience.

Most of the serious critics focus on the question of whether a computer can truly become intelligent.

The entire field of artificial intelligence, or AI, is devoted to this question. But AI doesn’t currently produce the kind of intelligence we associate with humans or even with talking computers in movies – HAL or C3PO or Data.

Actual AIs tend to be able to master only one highly specific domain, like interpreting search queries or playing chess. They operate within an extremely specific frame of reference. They don’t make conversation at parties. They’re intelligent, but only if you define intelligence in a vanishingly narrow way.

The kind of intelligence Kurzweil is talking about, which is called strong AI or artificial general intelligence, doesn’t exist yet.

Why not? Obviously we’re still waiting on all that exponentially growing computing power to get here.

But it’s also possible that there are things going on in our brains that can’t be duplicated electronically no matter how many MIPS you throw at them. The neurochemical architecture that generates the ephemeral chaos we know as human consciousness may just be too complex and analog to replicate in digital silicon.

The biologist Dennis Bray was one of the few voices of dissent at last summer’s Singularity Summit.

“Although biological components act in ways that are comparable to those in electronic circuits,” he argued, in a talk titled ‘What Cells Can Do That Robots Can’t,’ “they are set apart by the huge number of different states they can adopt.

Multiple biochemical processes create chemical modifications of protein molecules, further diversified by association with distinct structures at defined locations of a cell.

The resulting combinatorial explosion of states endows living systems with an almost infinite capacity to store information regarding past and present conditions and a unique capacity to prepare for future events.”

That makes the ones and zeros that computers trade in look pretty crude.

Underlying the practical challenges are a host of philosophical ones. Suppose we did create a computer that talked and acted in a way that was indistinguishable from a human being – in other words, a computer that could pass the Turing test. (Very loosely speaking, such a computer would be able to pass as human in a blind test.)

Would that mean that the computer was sentient, the way a human being is? Or would it just be an extremely sophisticated but essentially mechanical automaton without the mysterious spark of consciousness – a machine with no ghost in it? And how would we know?

Even if you grant that the Singularity is plausible, you’re still staring at a thicket of unanswerable questions.

  • If I can scan my consciousness into a computer, am I still me?

  • What are the geopolitics and the socioeconomics of the Singularity?

  • Who decides who gets to be immortal?

  • Who draws the line between sentient and non-sentient?

  • And as we approach immortality, omniscience and omnipotence, will our lives still have meaning?

  • By beating death, will we have lost our essential humanity?

Kurzweil admits that there’s a fundamental level of risk associated with the Singularity that’s impossible to refine away, simply because we don’t know what a highly advanced artificial intelligence, finding itself a newly created inhabitant of the planet Earth, would choose to do.

It might not feel like competing with us for resources. One of the goals of the Singularity Institute is to make sure not just that artificial intelligence develops but also that the AI is friendly. You don’t have to be a super-intelligent cyborg to understand that introducing a superior life-form into your own biosphere is a basic Darwinian error.

If the Singularity is coming, these questions are going to get answers whether we like it or not, and Kurzweil thinks that trying to put off the Singularity by banning technologies is not only impossible but also unethical and probably dangerous.

“It would require a totalitarian system to implement such a ban,” he says.

“It wouldn’t work. It would just drive these technologies underground, where the responsible scientists who we’re counting on to create the defenses would not have easy access to the tools.”

Kurzweil is an almost inhumanly patient and thorough debater. He relishes it.

He’s tireless in hunting down his critics so that he can respond to them, point by point, carefully and in detail.

Take the question of whether computers can replicate the biochemical complexity of an organic brain. Kurzweil yields no ground there whatsoever. He does not see any fundamental difference between flesh and silicon that would prevent the latter from thinking. He defies biologists to come up with a neurological mechanism that could not be modeled or at least matched in power and flexibility by software running on a computer.

He refuses to fall on his knees before the mystery of the human brain.

“Generally speaking,” he says, “the core of a disagreement I’ll have with a critic is, they’ll say, Oh, Kurzweil is underestimating the complexity of reverse-engineering of the human brain or the complexity of biology. But I don’t believe I’m underestimating the challenge. I think they’re underestimating the power of exponential growth.”

This position doesn’t make Kurzweil an outlier, at least among Singularitarians.

Plenty of people make more-extreme predictions. Since 2005 the neuroscientist Henry Markram has been running an ambitious initiative at the Brain Mind Institute of the École Polytechnique Fédérale de Lausanne in Switzerland. It’s called the Blue Brain project, and it’s an attempt to create a neuron-by-neuron simulation of a mammalian brain, using IBM’s Blue Gene super-computer.

So far, Markram’s team has managed to simulate one neocortical column from a rat’s brain, which contains about 10,000 neurons.

Markram has said that he hopes to have a complete virtual human brain up and running in 10 years. (Even Kurzweil sniffs at this. If it worked, he points out, you’d then have to educate the brain, and who knows how long that would take?)

By definition, the future beyond the Singularity is not knowable by our linear, chemical, animal brains, but Kurzweil is teeming with theories about it.

He positively flogs himself to think bigger and bigger; you can see him kicking against the confines of his aging organic hardware.

“When people look at the implications of ongoing exponential growth, it gets harder and harder to accept,” he says.

“So you get people who really accept, yes, things are progressing exponentially, but they fall off the horse at some point because the implications are too fantastic. I’ve tried to push myself to really look.”

In Kurzweil’s future, biotechnology and nanotechnology give us the power to manipulate our bodies and the world around us at will, at the molecular level.

Progress hyper-accelerates, and every hour brings a century’s worth of scientific breakthroughs. We ditch Darwin and take charge of our own evolution. The human genome becomes just so much code to be bug-tested and optimized and, if necessary, rewritten. Indefinite life extension becomes a reality; people die only if they choose to. Death loses its sting once and for all.

Kurzweil hopes to bring his dead father back to life.

We can scan our consciousnesses into computers and enter a virtual existence or swap our bodies for immortal robots and light out for the edges of space as intergalactic godlings. Within a matter of centuries, human intelligence will have re-engineered and saturated all the matter in the universe. This is, Kurzweil believes, our destiny as a species.

Or it isn’t. When the big questions get answered, a lot of the action will happen where no one can see it, deep inside the black silicon brains of the computers, which will either bloom bit by bit into conscious minds or just continue in ever more brilliant and powerful iterations of nonsentience.

But as for the minor questions, they’re already being decided all around us and in plain sight. The more you read about the Singularity, the more you start to see it peeking out at you, coyly, from unexpected directions. Five years ago we didn’t have 600 million humans carrying out their social lives over a single electronic network.

Now we have Facebook. Five years ago you didn’t see people double-checking what they were saying and where they were going, even as they were saying it and going there, using handheld network-enabled digital prosthetics.

Now we have iPhones. Is it an unimaginable step to take the iPhones out of our hands and put them into our skulls?

Already 30,000 patients with Parkinson’s disease have neural implants. Google is experimenting with computers that can drive cars. There are more than 2,000 robots fighting in Afghanistan alongside the human troops. This month a game show will once again figure in the history of artificial intelligence, but this time the computer will be the guest: an IBM super-computer nicknamed Watson will compete on Jeopardy!

Watson runs on 90 servers and takes up an entire room, and in a practice match in January it finished ahead of two former champions, Ken Jennings and Brad Rutter.

It got every question it answered right, but much more important, it didn’t need help understanding the questions (or, strictly speaking, the answers), which were phrased in plain English. Watson isn’t strong AI, but if strong AI happens, it will arrive gradually, bit by bit, and this will have been one of the bits.

A hundred years from now, Kurzweil and de Grey and the others could be the 22nd century’s answer to the Founding Fathers – except unlike the Founding Fathers, they’ll still be alive to get credit – or their ideas could look as hilariously retro and dated as Disney’s Tomorrowland.

Nothing gets old as fast as the future.

But even if they’re dead wrong about the future, they’re right about the present. They’re taking the long view and looking at the big picture. You may reject every specific article of the Singularitarian charter, but you should admire Kurzweil for taking the future seriously. Singularitarianism is grounded in the idea that change is real and that humanity is in charge of its own fate and that history might not be as simple as one damn thing after another.

Kurzweil likes to point out that your average cell phone is about a millionth the size of, a millionth the price of and a thousand times more powerful than the computer he had at MIT 40 years ago.
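That comparison is itself a back-of-envelope exponential calculation. A millionth the price and a thousand times the power is a billion-fold gain in power per dollar over 40 years, and you can work out the implied doubling time directly:

```python
import math

# Arithmetic behind the cell-phone comparison: ~1,000,000x cheaper and
# ~1,000x more powerful over 40 years means ~10**9 better power-per-dollar.

price_improvement = 1e6
power_improvement = 1e3
years = 40

price_performance_gain = price_improvement * power_improvement  # 10**9

# How many doublings is that, and how often did one arrive?
doublings = math.log2(price_performance_gain)  # about 29.9
doubling_time = years / doublings              # about 1.34 years

print(f"{doublings:.1f} doublings, one every {doubling_time:.2f} years")
```

A doubling every year and a bit is consistent with the "every couple of years" trend plotted earlier in the piece.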

Flip that forward 40 years and what does the world look like? If you really want to figure that out, you have to think very, very far outside the box.

Or maybe you have to think further inside it than anyone ever has before.