Speaking of simulations, here’s Christopher Chabris and David Goodman on the role that computers have settled into in chess:

Before the Deep Blue match, top players were using databases of games to prepare for tournaments. Computers could display games at high speed while the players searched for the patterns and weaknesses of their opponents. The programs could spot blunders, but they didn’t understand chess well enough to offer much more than that.

Once laptops could routinely dispatch grandmasters, however, it became possible to integrate their analysis fully into other aspects of the game. Commentators at major tournaments now consult computers to check their judgment. Online, fans get excited when their own “engines” discover moves the players miss. And elite grandmasters use computers to test their opening plans and generate new ideas.

This wouldn’t be very interesting if computers, with their ability to calculate millions of moves per second, were just correcting human blunders. But they are doing much more than that. When engines suggest surprising moves, or arrangements of pieces that look “ugly” to human sensibilities, they are often seeing more deeply into the game than their users. They are not perfect; sometimes long-term strategy still eludes them. But players have learned from computers that some kinds of chess positions are playable, or even advantageous, even though they might violate general principles. Having seen how machines go about attacking and especially defending, humans have become emboldened to try the same ideas.

Since the computers have already mastered chess, we’re now the ones learning from them. And becoming more like them…

Zeeya Merali:

Seth Lloyd, a quantum-mechanical engineer at MIT, estimated the number of “computer operations” our universe has performed since the Big Bang — basically, every event that has ever happened. To repeat them, and generate a perfect facsimile of reality down to the last atom, would take more energy than the universe has. 

“The computer would have to be bigger than the universe, and time would tick more slowly in the program than in reality,” says Lloyd. “So why even bother building it?” 

But others soon realized that making an imperfect copy of the universe that’s just good enough to fool its inhabitants would take far less computational power. In such a makeshift cosmos, the fine details of the microscopic world and the farthest stars might only be filled in by the programmers on the rare occasions that people study them with scientific equipment. As soon as no one was looking, they’d simply vanish. 

In theory, however, we’d never detect these disappearing features, because each time the simulators noticed we were observing them again, they’d sketch them back in.

That realization makes creating virtual universes eerily possible, even for us. Today’s supercomputers already crudely model the early universe, simulating how infant galaxies grew and changed. Given the rapid technological advances we’ve witnessed over past decades — your cell phone has more processing power than NASA’s computers had during the moon landings — it’s not a huge leap to imagine that such simulations will eventually encompass intelligent life.