Tuesday, June 28, 2005

From the Oval Office, With Love

Transcript of President Bush's speech

Tuesday, June 28, 2005

Thank you and good evening. I am pleased to visit Fort Bragg, home of the Airborne and Special Operations Forces. It is an honor to speak before you tonight.

I have a record in office. And all Americans have seen that record. September the 4th, 2001, I stood in the ruins of the Twin Towers. It's a day I will never forget.

Many terrorists who kill innocent men, women and children on the streets of Baghdad are followers of the same murderous ideology that took the lives of our citizens in New York, Washington and Pennsylvania. There is only one course of action against them: to defeat them abroad before they attack us at home.

I'm not the expert on how the Iraqi people think, because I live in America, where it's nice and safe and secure.

It's in our country's interests to find those who would do harm to us and get them out of harm's way.

Our enemies are innovative and resourceful, and so are we. They never stop thinking about new ways to harm our country and our people, and neither do we.

You see, not only did the attacks help accelerate a recession, the attacks reminded us that we are at war.

But Iraq has — have got people there that are willing to kill, and they're hard-nosed killers. And we will work with the Iraqis to secure their future.

I recognize that Americans want our troops to come home as quickly as possible. So do I.

It's a time of sorrow and sadness when we lose a loss of life.

Who could have possibly envisioned an erection — an election in Iraq at this point in history?

I'm honored to shake the hand of a brave Iraqi citizen who had his hand cut off by Saddam Hussein.

And the second way to defeat the terrorists is to spread freedom. You see, the best way to defeat a society that is -- doesn't have hope, a society where people become so angry they're willing to become suiciders, is to spread freedom, is to spread democracy.

Let me put it to you bluntly. In a changing world, we want more people to have control over your own life.

Thank you. And may God bless America.

* I might have mixed up some of tonight's speech with some other quotes by the President, but they are all actual quotes...

Sunday, June 26, 2005

Blind Spots

It struck me this week how little we humans differ from other animals on this planet.

We pride ourselves on our civilization, our technology, our ability to communicate - all aspects we believe are unique to humans. We pride ourselves on our sentience (without really defining what this means, possibly to make it easier to deny it exists in other animals). And although there are examples of complex social structures, engineering, and communication in other species, it's easy to agree that the human versions of these are of an order of complexity higher than these other examples.

But our brains have blind spots: concepts or ways of thinking about things that do not come naturally, and in such a way that it never occurs to us that other ways of thinking about a topic even exist.

For those who are rigorous enough in their pursuit of knowledge, hints of these blind spots emerge. An obvious physical example is the blind spot in our vision caused by the place on the retina where the optic nerve has to pass through. Our brain "fills in" the missing information so that we don't even notice that we have a blind spot.

It seems that we have similar blind spots in many categories of information processing in the brain, some of them internal. All are a result of the vagaries of natural selection (ie, some because they offer a selective advantage and breed true, others as a random result of some other adaptation).

What's interesting is that we as a society (in the U.S. and other countries, anyway) have decided that some traits that are not species beneficial should still be tolerated, even at the expense of society. Medical procedures and devices are an obvious example.

Take the fact that I wear contacts as a case in point. In any era before the very recent past, I would have been lion food long before the ripe old age of 40 (or more likely, killed by a competing tribe, or left behind by my own tribe as a burden they couldn't afford). But our society has not only determined that it is okay to have poor vision, but that it will expend resources to help offset this maladaptation.

We have determined that it is okay to carry these genes along to following generations. Although not beneficial to society, this is part of what it means to have civilization, part of what we say sets us apart from other species on Earth.

But we have also decided, perhaps not deliberately or consciously (but with the same end result) that other traits that are not species beneficial should not be condoned, and in fact should be left to natural selection to weed out in each generation.

Examples of traits that are not tolerated well by society are those that cause an individual to be poor at earning enough for food and shelter. This could be extreme, such as cases of mental disorders like schizophrenia, or even mild mental retardation. This could be selfish, such as those who decide that begging or social welfare programs are an easier way of life than working for a living. This could be subtle and environmental, such as children in single-parent households, where the choice for the parent is to starve or to abandon the children for long periods while working multiple low-paying jobs to afford food and shelter.

Regardless of the reason, most societies on earth have made the effective decision that it is not the responsibility of that society to feed and house these individuals. Our own society here in the U.S. struggles with this quite a bit, torn between an intellectual position that perhaps an enlightened society should let no one starve, and an inner voice that fights against this, saying that those people should find their own way to get food - "after all, I had to work hard to get what I have, right?"

I believe this inner voice has a perfectly valid basis in evolutionary biology. As I've discussed before in prior posts, the concept of a trait favoring "altruistic punishment" is one that appears to be a necessary trait for a stable society [subscription required for this link]. I won't revisit this concept in depth, but it is a trait that evolves to punish selfish behavior that may be detrimental to the group. Without a fair number of individuals feeling a sense of outrage and injustice at behaviors perceived to be taking advantage of the group's resources, it isn't possible to establish societal structures larger than a few tens of people.

I believe this is a large factor that creates the blind spot regarding how we treat individuals with respect to food and shelter. And man, being a rationalizing animal, creates all kinds of "logical" reasons why we should deny food and shelter to another individual, even if we can afford it.

We say that it is a moral value to encourage others to get a job, to take care of themselves, and that it is just laziness and moral turpitude that prevents these individuals from doing so.

Or we say that it's too bad that these individuals don't have the food and shelter they need, but that our society just can't afford to take care of the problem (while spending $300 billion on military offensives on the other side of the world).

We come up with multiple fine sounding reasons not to help these individuals, even if the evidence shows that the society may in fact be better off by providing it. (More on this later, but there is a tremendous body of evidence that extreme poverty of the sort we're talking about, where individuals must expend most of their energy on securing the basic animal needs, leads to a large amount of anti-social and pathological behavior, from crime to abuse that perpetuates through generations. And that the societal cost of this result is huge.)

But who is to say that this instinct is wrong? It got us this far, didn't it? It's probably fair to say that from a pure Darwinian winnowing of the species that society overall is better off if these individuals cannot perpetuate their genes (and maladaptive traits) to further generations.

But... if we're truly honest with ourselves, we have to admit that we as a society are pretty damn arbitrary about which maladaptive traits we agree to expend resources to compensate for, and which ones we rationalize not to. We pick and choose where we want evolutionary processes to continue, and where we want to step in to stop the selection process.

And I don't think we're making these decisions rationally. Or in many cases, even consciously.

Because we're blind to the fact that there is a choice to be made.

And we're blind to the fact that we have these blind spots.

Because we're only human, after all, which means that we're just another random evolutionary byproduct.

Unless we choose to be something more.

Wednesday, June 15, 2005

And Now For Something Completely Different

"Dying is easy. Comedy is hard." Oscar Wilde (a contemporary of Sir Monty Python of Castle Anthrax) first said this, but managed to make comedy easy and dying hard (penniless, alone in a hotel room, never having recovered from the cerebral meningitis contracted while in a Dickensian prison, convicted on sodomy charges).

However, I think he accurately spoke for most of us. Comedy is tough. To write something funny, you have to be extremely clever with words. It also helps to actually be funny - funny is hard to fake.

Oh yeah - and if you want your post to be funny, never start it with a tragic story of death in the time of Dickens - it just doesn't get much less funny than that. (Unless you throw in a name like "Pip" or some tart wearing a 30 year old wedding dress to lighten the mood).

(Never mind, strike that - it's all damn bloody tragic no matter how you spin it).

Funny writing is the toughest. In a movie, you can avoid the need for clever writing by doing slapstick, or sometimes just looking funny. Or being funny looking.

It's tough to do slapstick in a written piece. For instance, just now I reached for my beer, almost knocked it off my desk, grabbed at it and it foamed over, so I tried to rapidly suck it down but then beer came out of my nose and sprayed all over the keyboard, which sent sparks flying everywhere and caught my underpants on fire, which I fortunately quickly doused with the rest of the beer.

And I'll bet you didn't catch any of that, or even laugh one little bit. And my balls are sitting in hot sticky beer for nothing.

You can't do funny looking in writing. I kood spel werdz so thay wur funny luking...but that just looks stupid, not funny.

I suppose I could attempt some cartoon ascii art, but that's not really writing, it's painting in another medium.

And even if you do manage to actually write something that you think is funny, many (most?) readers think otherwise. (Take yourself, as a case in point). Humor is like wine - there are many varieties of each, accompanied by their respective fans and hecklers, some are better fresh off the vine while others need some time to blossom, and both stain the carpet.

I'd advise against reading your latest witticism out loud to your cubicle neighbors. Most will respond with a look that will have you checking in a mirror to see if you've suddenly acquired a big zit that just popped or a harelip or something. That kind of look that falls between feeling-sorry-for-you and horrified-but-for-some-reason-morbidly-fascinated-and-can't-look-away. Anyway, not a look that will make you feel all warm and fuzzy-bunny, wanting-to-hold-hands-and-sing-kumbaya with the brotherhood of the cubicles - basically, not a good look.

I'd also advise never to ingest any mind altering substances prior to attempting humor writing. Oh, sure, the prose will come flying out of your fingertips, every line a pearl generating peals of laughter. But in the sober light of day, it will look like the secret ancient language of the Mayan Monkey Gods - ie, it'll be gibberish. (Or perhaps it truly is masterful craftsmanship of the English language that would have the Monkey Gods laughing till bananas were squirting out their butts, but all the philistine humans who read it will still think it's gibberish).

One last tidbit. Never attempt humor writing at 3am. This is essentially the same as the mind altering substance effect, and will cause you to write nonsensical, irrelevant crap that no one will want to read (or, having inadvertently read it, will make them want to tear their eyes out of their head and light themselves on fire).

Say Good Night Gracie.

Sunday, June 12, 2005

Of Two Minds

It never ceases to amaze me the ability of humans to hold two completely contradictory thoughts in their head at the same time, with no impulse at all to attempt to reconcile them.

It shouldn't. Amaze me, I mean. There's not a person I've met who hasn't at some point demonstrated positions of illogical inconsistency.

And I must include myself in that category as well. Although when such inconsistencies are pointed out to me by my very dear and blunt friends, I do make some attempt to modify them or rationalize them.

So you'd think even with my intimate familiarity with such a state of ignorant bliss, I would be not only unsurprised, but understanding of such a condition when I see it in someone else.

But I'm not. Why is that?

"We hate in others what we hate most in ourselves" (author unknown). I think I see, and despise, ignorance of self in others because I have spent a fair amount of my life attempting to understand my own self, and have clearly had little success. This, coupled with my disgust with self-failure, combines to result in an ultimate spite for others who don't even try.

Or perhaps it's just disgust at the even shallower nature of hypocrisy. Someone who holds incompatible world views but is unaware of the fact is probably not a hypocrite. A hypocrite is aware that he says one thing publicly, but does another privately. Ignorance is, at worst, sad. Hypocrisy is putridly disgusting, a scourge of the earth.

Some examples of grossly contradictory positions that I commonly see:

  • "Right-to-lifers" that are for the death penalty. Or even more perversely, anti-abortionists who actively promote the deaths of doctors who perform the procedure.

  • Lawmakers who make laws that are applicable to anyone other than themselves. This needn't be overt, like passing a pay cut for everyone but their own group. It could be passing a mandatory extension of military enlistment terms when they have no family member who could be affected. Or passing tax "cuts" that only apply to their economic group due to the nature of the item taxed, like cutting luxury taxes. Or "fiscal conservatives" who ask for appropriations "out-of-budget" that create some of the highest national debt in the history of man.

  • Church goers who follow few of the tenets of their professed religion (and even those that are followed are done in the absence of any real challenge to the belief). For example, of the Ten Commandments, at least four of them can be followed faithfully in a normal life without having to do much at all, as the occasion to break with them rarely occurs.

  • I shouldn't leave out "Liberal" hypocrites, although most extreme leftists I know are just impractical or foolish rather than hypocritical. But I regularly see those who press for more "societal good" laws like helmets and seat belts as often being the worst about letting their kids bounce around the back seat of the car without seat belts, or ride their bike without helmets.

I've come to believe that my internal disgust at hypocrisy and illogical thinking has its basis in evolutionary biology, rather than some carefully developed rational world view that values honest and rational decision making.

I believe I have a strong genetic disposition to being an "altruistic punisher." Altruistic Punishment is basically the punishment of a "cheater" even when there is no direct gain for the individual doing the punishing. (more here, and here)

A recent article in New Scientist discussed a series of experiments and models that explain how cooperation in large groups evolves. Cooperation in groups is a feature that allows a given group to outperform a group that doesn't cooperate as well (see some of my prior posts). The interesting thing highlighted in this article was the role of "altruistic punishment" in the development of cooperation.

...strong reciprocity is not simply a matter of cooperation; it also requires punishment of those who fail to toe the line. When the team added punishment to their models, they found it made a huge difference. In a second round of simulations, they included a new kind of individual: the "punishers". These punishers were not only willing to cooperate with others but also to punish cheats. By making cheats pay for their antisocial actions, they tipped the balance towards cooperation...cooperation can become the default behaviour in large groups provided punishers are willing to punish not only those who cheat, but also those who fail to punish cheats (see Graph). "In this case," Fehr says, "even groups of several hundred individuals can establish cooperation rates of between 70 and 80 per cent."
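The punisher effect is easy to see even in a toy model. Below is a minimal one-shot public goods game of my own devising - the parameters and payoff rules are made up for illustration, not the actual Fehr model from the article. Everyone shares the multiplied pot, cooperators and punishers pay in, and punishers additionally fine each defector at a small cost to themselves:

```python
def play_round(strategies, contribution=1.0, multiplier=3.0,
               fine=2.0, punish_cost=0.5):
    """One round of a public goods game with punishers.

    strategies: list of 'C' (cooperator), 'D' (defector),
                or 'P' (punisher: cooperates AND fines defectors).
    Returns a list of payoffs, one per player.
    """
    n = len(strategies)
    contributors = [s in ('C', 'P') for s in strategies]
    pot = contribution * sum(contributors) * multiplier
    share = pot / n                      # everyone shares the pot equally
    n_punishers = strategies.count('P')
    n_defectors = strategies.count('D')
    payoffs = []
    for s, contributed in zip(strategies, contributors):
        p = share - (contribution if contributed else 0.0)
        if s == 'D':
            p -= fine * n_punishers      # every punisher fines every cheat
        if s == 'P':
            p -= punish_cost * n_defectors   # punishing isn't free
        payoffs.append(p)
    return payoffs

# One defector among nine cooperators: free-riding pays...
no_punish = play_round(['C'] * 9 + ['D'])
# ...until some of the cooperators are willing to punish.
with_punish = play_round(['C'] * 5 + ['P'] * 4 + ['D'])
```

Without punishers the defector simply pockets his share of everyone else's contributions; swap a few cooperators for punishers and defecting becomes the worst-paying strategy at the table, which is the tipping point the simulations describe.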

My disgust is hardwired - my genes predispose me to watch and scold. I was born to be a Punisher.

But "quis custodiet ipsos custodes?"

Quidquid latine dictum sit, altum videtur

Thursday, June 09, 2005

Time Again

Part 2 of a series on the nature of Time

As I said in a prior post, recent notoriety of one Peter Lynds and a paper he wrote in 2003 has caused me to revisit my assumptions about time.

While I still like the Universe As Simulation theory, it does have some weak points. (Do you know what they are?) Mr. Lynds' "theory" (if writing a philosophy paper and publishing it in a physics journal makes a theory) is that there is no such thing as a point in time.

Now I agree that the concept of a point having physical reality is one I've struggled with as well. Even in my Universe as Simulation theory, the smallest unit of distance (or the smallest cell of the simulation) is a Planck Length. This would mean that, even though very small, there is still a "distance" covered by the smallest unit of space. I don't think there is any physical manifestation of this mathematical concept called a point. Neither does Mr. Lynds.

As a result, neither is there a physical manifestation of a point in time. According to Mr. Lynds, there is only Interval.
For example, if two separate events are measured to take place at either 1 hour or 10.00 seconds, these two values indicate the events occurred during the time intervals of 1 and 1.99999... hours and 10.00 and 10.0099999... seconds, respectively.
He goes on to say that no matter how small you make the interval, it can't be zero. So there is no static instant in time at which the position in space of a body in relative motion can be precisely determined.

(In fact, this same concept applies to space. When you say an object is at a set of spatial coordinates, it is really "smeared" over an interval of coordinates determined by the size of the object. So even if time were quantized, it's still meaningless to say that an object is at a precise set of coordinates at a specific point in time.)

Mr. Lynds goes on to say that it is this very fact, that there isn't a real physical "instant" in time, that allows for movement in the first place. Something about if there really were static instants, then objects would be frozen in place in a given instant, with no way to "advance" or move to the next discontinuous instant. (Clearly he hasn't seen the Universe as Simulation concept, where objects "move" from static instant to instant because they are just calculations of properties that say the position changes).

This indeterminacy of time and position means that either can only be measured as bounded by uncertainty within the limits of Heisenberg's uncertainty principle and quantum uncertainty. Physics experiments (so far) don't disagree with these limits on real world measurements, although whether the limits are due to this "interval" theory or some other fundamental physical principles (say derived from string theory) has not been shown.

Although I'm not sure I understand exactly what he is driving at, it seems he is almost saying that time, which can only be represented as intervals, is really a derivative measurement of movement through space. It is only the movement through space that causes the illusion of time, and only the order of the positions of bodies in motion that provides the illusion that it flows in a particular direction.

And since there is no reality behind the concept of a static point in time, so too do all the equations in physics which rely on a static point in time fail to represent reality. And this is all of them: velocity, momentum, acceleration, frequency, wavelength, rest mass, energy...they all need to be rejiggered to handle intervals rather than point values. (But does the concept of a point work pretty well in giving a reasonably approximate answer, accurate enough for most uses? You betcha.)

The sad news in all of this is that if true, there could never be any such thing as time travel (at least, not in the reverse "direction"). I guess this is why no time travellers showed up to the recent Time Traveller Convention.

But while it may be true that time can only exist in the reality of an interval, this doesn't eliminate the Universe as Simulation, which already had that concept baked in. And it does nothing to explain why we have fundamental limits like c (the speed of light in a vacuum), whereas such a limit is consistent with the "clock speed" of the universal computer.

Just because we don't like the Russian matryoshka doll implication of the Universe as Simulation doesn't mean that it isn't true. Just because we can't imagine what the "container" for the universal computer could be like (existing not in time or space), doesn't mean that it doesn't exist. I doubt any of my programs have a concept of the "real" world in which they run. There are just bits flipping according to some rules, each flip at a clock interval.

I'll try to wrap this up this weekend with my modified Universe as Simulation theories. (Ok - they're not theories unless I have math to go with them. Call them hypotheses).

Sunday, June 05, 2005

La, La, La...I can't hear you

The Dallas Morning News Sunday edition had a front page article Christians flocking to religious media [link requires registration].
When Family Net reported on the recent Miss Universe pageant, the Fort Worth-based Christian TV network edited out footage of the swimsuit competition.

When World magazine wrote about a church embroiled in controversy, the Christian publication noted that the "mainstream media had badly garbled the story."

And when the Christian Broadcasting Network covered founder Pat Robertson's trip to India, a reporter matter-of-factly described miracles that had been delivered.
The article goes on to describe how many Christians are turning to religious media for their news.

"Sacred media is more trustworthy than its secular counterparts...religious news outlets provide an alternative for those who reject mainstream media."

This trend isn't unique to Christians. Islamic extremists have for years been getting their unique perspective on world events from various middle eastern outlets. In the US, many on the right have been getting their news in recent years from "Fair and Balanced" Fox news (a counter, many of them say, to the inherent left bias of the other news outlets that has existed for years).

Increasingly, people are only listening to those with whom they agree, only getting their news from those outlets that reflect their existing world view, only hitting the web sites and RSS feeds of those who tell them what they want to hear.

Is there no source for unbiased reporting?

Admittedly, there probably never was. John Leo loves to make a career out of pointing out how the left leaning media slants and even ignores stories that don't agree with their preconceived views (although how such a self-editing left-leaning press would publish anything by John Leo is conveniently ignored).

But even if you don't buy into the "vast left-wing/right-wing/ secular/religious/ power-broker/World Government conspiracy to control the press," it is probably fair to say that with limited distribution channels for information, it was easier in the past to control the dissemination of information and bias it toward whatever viewpoint the gatekeepers chose. (Control of information dissemination is integral to every entity vying for power, from China to our own government and corporations. We just happen to have more rule of law that makes our government have to be much more clever about such control.)

The internet breaks down the barrier to information dissemination. It is likely that "The Truth Is Out There." If you can find it.

And there's the rub. Each individual must sift through the myriad of web information sources, trying to discern the trustworthy from the misleading and downright disinformation sites.

And so each picks their information source, be it television, newspaper, or web site, based upon a comparison of the information from that source and the information the individual already "knows" to be true.

Depending upon the critical thinking skills of the individual, some will make better choices than others. Most will subscribe to sources that feed information that fits neatly into a set of preconceived notions, because that is how they found their audience in the first place.

Everyone will be sure that anyone spouting information that differs in content from that received by their media outlet of choice is wrong - dead wrong. Not even worth listening to. "La, la, la...I can't heeeeaar youuuuuu..."

It is difficult to come to consensus or even compromise on different worldviews when the basis of facts can't even be agreed to. There is no common ground.

And that doesn't bode well for democratic processes for decision making.

We need a Truth Machine - something that can let everyone know when an individual at least believes they are telling the truth. Perceptions of events will still differ, but at least we can triangulate between truthful recitations of perceptions to come to some common understanding of the facts of an event.

Without a Truth Machine, we only have disinformation vs. spin, with the truth lost to a little unread site in cyberspace.

Wednesday, June 01, 2005


The question of the nature of time has always fascinated me. How can we know so little about a physical property that so underpins our lives?

Recent notoriety of one Peter Lynds and a paper he wrote in 2003 has caused me to revisit my assumptions about time.

First, my prior understanding of a working model for time. Then, an examination of Mr. Lynds position. Finally, a revised working model.

The Universe As Computer

I've long been of the belief that the universe is actually a large simulation (à la The Matrix, although this concept was around long before that movie - where do you think the writers got the idea?)

Computer simulations work the following way. The physical space is broken up into arbitrarily small "cells", each of which have some properties and values. Similarly, an arbitrarily small time slice is selected for the simulation. At each time slice, the values of each of the cell properties are calculated and set. Depending on the simulation, what is happening in one cell affects surrounding cells.

Depending on the number of cells and the complexity of the calculations, the "real" time it takes to perform one iteration (one "time slice") of the simulation can be much longer than the time interval modeled in the simulation. (In other words, it might take a minute to calculate the values of a billion cells for a 1 second interval, then another minute for the next second, etc.) This is known as the "time-ratio" of the simulation, and is usually represented as simulation-time/computational-time.

When the time-ratio is equal to one, then the simulation takes place at the same speed as "real-time". Games and virtual reality environments require a time-ratio of 1 or higher. Most physics simulations run at time-ratios significantly less than 1 due to the number of calculations involved vs. the computing power available.
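The time-ratio is just simulated seconds divided by wall-clock seconds, so it's easy to measure for any stepped simulation loop. A quick sketch (the step function and numbers here are stand-ins, not any particular simulator):

```python
import time

def time_ratio(step_fn, sim_dt, n_steps):
    """Measure simulation-time / computational-time for a stepped loop.

    step_fn -- advances the simulated world by one time slice
    sim_dt  -- how much simulated time one slice represents (seconds)
    n_steps -- number of slices to run
    """
    start = time.perf_counter()
    for _ in range(n_steps):
        step_fn()
    wall = time.perf_counter() - start
    return (sim_dt * n_steps) / wall

# A deliberately slow step: ~1 ms of wall time per 1 microsecond simulated,
# standing in for a physics simulation with lots of cells to update.
ratio = time_ratio(lambda: time.sleep(0.001), sim_dt=1e-6, n_steps=20)
# ratio comes out far below 1: the simulation runs much slower than real time
```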

Now, consider The Matrix scenario again, where one lives in a "virtual" environment, a simulation of the real world where the sensory inputs are calculated and fed to your brain, either via your senses or directly. Theoretically, if the simulation is detailed enough (ie, has a sufficiently large number of cells, a sufficiently small time interval, and a time-ratio of 1 or greater), then you shouldn't be able to tell the difference between the simulation and the real world. (This concept has been a problem debated by philosophers for years.)

The next step in understanding The Universe As Simulation is making the assumption that there is no "physical" brain at all, but only a simulation of a brain. In this picture, we have taken a real, physical brain and developed a model of all the neurons, synapses, neurotransmitters, etc at a sufficiently detailed level that the model behaves like a real, physical brain. Given the same inputs, the model produces the same (or within the bounds of non-linear systems, essentially the same) behaviors and outputs.

This is clearly beyond our technology today, but given the orders of magnitude leaps in processor size/performance/cost we've seen in the past 30 years, the extrapolation to the level of computing power necessary to produce a model of the brain in sufficient detail is absolutely within the grasp of technology within the next 30 years.

So, now we have a simulation that simulates the physical external world, and a simulation that simulates a brain. We can put these two together, and see what happens. (In fact, the separation of these simulations is an artificiality - if we can simulate the physical world to a sufficient fidelity, and the brain is just another physical construct in that world, then the simulation of the physical world can contain the simulation of the brain - or many brains, for that matter.)

An interesting observation is that the time-ratio of the simulation has no effect within the simulation itself. By this, I mean that even if it takes 1 minute in the "real" world to simulate 1 nanosecond of the simulated world, within the simulation the calculations (and therefore the subjective experience of the simulated brain) are the same as if it took 1 nanosecond in the real world to calculate 1 nanosecond in the simulation.

Think of simulating a bullet moving through the air. Each time slice, the calculations of the cells that represent the absolute position of physical objects would each increment. The cell that held the front-most atom of the metal of the bullet would, in the next time slice, hold the atom that would be in that position after the calculation interval, say the distance a bullet travels in a nanosecond. The cell next to this bullet-containing cell, which held part of an air molecule 1 nanosecond ago, will now be calculated to hold the front-most atom of the bullet. Interval by interval, time-slice by time-slice, we simulate the motion of the bullet through space.

If we have a time-ratio of 1, then the bullet moves in 1 nanosecond the same distance it would in the "real" world in 1 nanosecond. If we have a much smaller time-ratio, then the bullet would move in the simulation that same nanosecond distance, but in the real world maybe a full second has gone by (because we had to calculate a whole heck of a lot of other cell values as well for that time-slice). In other words, we can adjust the time-ratio all we want, and the simulated bullet would never know the difference. The same would hold true of the "simulated" brain. Because to the simulated brain, time flows interval-by-interval, time-slice by time-slice, regardless of the true clock-speed of the processor performing the calculations.
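The bullet picture above can be sketched as a toy 1-D grid of cells (a cartoon, not a real physics simulation). The point to notice is that nothing inside the loop depends on how long each iteration takes in wall-clock time - the bullet advances exactly one cell per time-slice no matter what the time-ratio is:

```python
def simulate_bullet(n_cells=8, n_slices=5):
    """Toy 1-D version of the cell picture.

    Each time slice, the cell holding the bullet's front-most atom is
    recalculated to hold air, and its neighbour (which held air one
    slice ago) is recalculated to hold the bullet.
    """
    cells = ['air'] * n_cells
    cells[0] = 'bullet'
    history = [list(cells)]          # snapshot of every time-slice
    for _ in range(n_slices):
        i = cells.index('bullet')
        cells[i], cells[i + 1] = 'air', 'bullet'
        history.append(list(cells))
    return history

history = simulate_bullet()
# after 5 slices the bullet occupies cell 5 - one cell per slice,
# regardless of how long each loop iteration took on the "real" clock
```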

All these concepts have been used in science fiction for a while. Imagine if we had the computing capacity to run at time-ratios greater than 1, i.e. faster than "real-time." We could collect inputs from the real world, pass them to the "simulated" brain, and it could react faster than the "real" brain in the "real" world. The simulated brain could think faster than the real brain. The simulated brain could spend the subjective equivalent of a day to think over something that perhaps the real brain only had minutes to ponder. I imagine the simulated brain would kick the real brain's ass. (Man, talk about a mixed metaphor.)

Okay, three more steps to get to the conclusion of this section.

You can see that to the simulation, time "feels" like it is flowing and is continuous, not broken up into chunks. (Compare this to the beta movement phenomenon - often mistakenly called "persistence of vision" - produced by your eyes and brain while watching a movie. The movie is really a sequence of discrete static images, where each image is displayed for about 1/30th of a second and then the next image is displayed. To your brain, it looks like continuous motion - but it's most certainly not.)

This demonstrates that it is very possible that even in the real world, time is broken into discrete chunks of duration that come one after another so fast that it seems continuous to us. The duration I subscribe to is known as a Planck Second, which is the time it takes light to traverse a Planck length, or about 10^-44 seconds. A Planck length is the theoretically smallest measurable distance. The speed of light in a vacuum is the theoretically fastest velocity in the universe. The fastest velocity over the shortest measurable distance produces the Planck Second.
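As a quick sanity check of that figure, you can divide the Planck length by the speed of light yourself (constant values here are the standard published ones, rounded):

```python
# Back-of-envelope check of the "Planck Second" quoted above:
# Planck length divided by the speed of light in a vacuum.
PLANCK_LENGTH = 1.616e-35   # meters
SPEED_OF_LIGHT = 2.998e8    # meters per second

planck_time = PLANCK_LENGTH / SPEED_OF_LIGHT
print(planck_time)  # ~5.39e-44 seconds, i.e. on the order of 10^-44
```

So "about 10^-44 seconds" is an order-of-magnitude statement; the commonly quoted value is roughly 5.4 x 10^-44 seconds.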

A simulation that ran using time-slices of Planck seconds and cell sizes of Planck length would be a simulation of the universe. And the "real" time it took to actually calculate the next value of each cell for each iteration wouldn't matter, because within the simulation it isn't detectable. So if we are each simulated brains running in a simulated universe on a big universal computer, then we wouldn't be able to tell the difference.

Now, really smart guys like David Deutsch hate this argument because in one sense all it does is push off the really hard questions, like the "true" nature of the universe, one more level removed. In other words, if we really are a simulation, then what is the nature of the simulator? What are its physical properties? How could we even begin to determine these? Deutsch likens this to the "God" argument, in that once you make this argument it really cuts off the possibility of further meaningful inquiry. (I.e., if I say the bullet moves the way it does because God makes it move that way, then there isn't much more meaningful I can ask about how that works - it's probably even fruitless to try to make predictions based upon prior observation, because God could just change his mind at any time.)

But just because the argument makes the "true" nature of reality harder to discern doesn't necessarily make it an incorrect argument. That too would just be wishful thinking. Science would say we need a way to test the difference between the two theories so we could determine which one is false. Unfortunately, no one has yet come up with a good experiment to falsify the Universal Computer theory. (The closest is to posit finding some phenomenon that we could prove mathematically can *not* be simulated - the fact that it existed would then prove that we can't be living in a simulation. They're still looking...)

(As a side note, the Universal Computer theory solves some puzzles like Zeno's paradox. The dichotomy version of Zeno's paradox basically goes like this: to get from point A to point B, I must first cross half the distance from A to B. Then from there, I must cross half the distance again. And so on. If space and time are continuous, then they are "infinitely" divisible, so in theory I must cross an infinite number of half-distances to get to point B. Yet we all know it doesn't take infinite time to move my finger from the space bar to the delete key, so something is wrong in the assumptions of this argument. In the Universal Computer, of course, everything is quantized into discrete space and time slices, and objects "jump" from cell to cell with each time slice - there is no infinite number of distances to cross.)
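The quantized escape from the paradox can be sketched in a few lines. The unit-cell grid and the specific distance are my own illustrative choices; the point is that once there's a smallest cell, the halving has to bottom out, and the actual trip is a finite count of jumps.

```python
# Zeno's dichotomy in a quantized world: with a smallest cell size,
# the halving process must stop, and the trip from A to B is a finite
# number of jumps rather than an infinite series of half-distances.

def zeno_halvings(distance, cell_size):
    """Count how many times the remaining distance can be halved
    before it would drop below one cell - where halving must stop."""
    halvings = 0
    remaining = distance
    while remaining >= 2 * cell_size:
        remaining /= 2
        halvings += 1
    return halvings

def discrete_steps(distance, cell_size):
    """In the simulation, the object simply jumps cell to cell."""
    return int(distance / cell_size)

print(zeno_halvings(1024, 1))   # 10 - then you're within one cell of B
print(discrete_steps(1024, 1))  # 1024 finite jumps; no infinity anywhere
```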

The next post will address the concepts introduced in Peter Lynds's paper, and perhaps a modified version of the Universal Computer theory of time. (I'll also go back through this and set links to referenced information, but right now I'm out of time...)