When I was about five years old, I learned a game called “War.” Two players are dealt 26 cards face down. Each then simultaneously shows the top card, and the player showing the higher value takes both exposed cards and places them at the bottom of his or her stack. If both cards are of equal value, there is a “war.” Each combatant places the next three cards face down, and the fourth face up. The card showing the higher value captures all the cards played and puts them at the bottom of his or her stack. The game ends when one player has all 52 cards. I was very good at “War,” or so I thought. I hadn’t yet heard about “confirmation bias,” which was why I remembered my victories more than my defeats.
When I finally realized that the game was skill-free, I lost interest. Knowing that the outcome is completely determined once the deck is shuffled and dealt, I began to invent variations. For instance, I’d put all 4 aces (the highest value) in one stack and the remaining 48 cards in the other stack, and play. After playing five times, the stack that started with 4 aces won all but once. I concluded that it was better to start with the 4-ace stack. I hadn’t known at the time that I was applying a naïve version of a Monte Carlo method, a mathematical procedure in which a large number of repeated trials produce reasonably accurate predictions about the actual probability of the outcome of an event. (For instance, if you flipped a fair coin 1000 times, you would probably get approximately 500 heads, give or take 20.)
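The coin-flip example can itself be run as a tiny Monte Carlo experiment. Here is a minimal sketch in Python; the function name and the trial counts are my own choices, not from the text:

```python
import random

def estimate_within_20(trials=2000, flips=1000, seed=42):
    """Monte Carlo estimate of how often `flips` fair-coin flips
    land within 20 heads of the expected 500."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # Count heads in one trial of `flips` fair coin tosses.
        heads = sum(rng.getrandbits(1) for _ in range(flips))
        if abs(heads - flips // 2) <= 20:
            hits += 1
    return hits / trials

print(estimate_within_20())  # roughly 0.8 for a fair coin
```

With only 2,000 trials the estimate still fluctuates a little from run to run; adding more trials narrows it toward the true probability, which is precisely the point of the method.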
The simplistic childhood card game of “War” has much in common with my subsequent career as a research mathematician. Mathematics is also a completely determined game that starts with a set of assumptions called axioms (the axioms for “War” are shuffle and deal) and follows rules of inference by which the truth or falsity of mathematical statements (conjectures) can be established. What makes mathematics more interesting than “War” is the ingenuity required to prove or disprove conjectures based on the axioms.
Mathematicians usually begin with axioms that seem “self-evident” because they are more likely to guide us to real-world truths, including scientific discoveries and accurate predictions of physical phenomena. Most ancient religions are also loosely based on an axiom or a set of axioms. The most common axiom is “God exists,” which is not as self-evident as it appeared to be in a pre-scientific world.
I ask you: Which abstraction is more practical for better understanding and solving real world problems—mathematics and its scientific implications, or God? Even most religious believers no longer attribute an eclipse to God’s wrath. We now know that a solar eclipse occurs when the moon passes between the earth and the sun, and a lunar eclipse occurs when the earth passes between the moon and the sun. Science tells us precisely when such eclipses will occur. Most theists also accept that earthquakes have more to do with plate tectonics than with God’s anger over specific human behaviors. A non-scientific “God axiom” might give comfort to believers, but it lacks any predictive value.
The mathematician Leopold Kronecker once said, “God created the integers, all else is the work of man.” This is a statement more about the axiomatic approach than about numbers or theology. It means that to build a system you have to start somewhere, to accept something. All mathematical systems begin with axioms (for instance, “Things equal to the same thing are equal to each other”). A mathematician is interested in the conclusions that may be deduced from the axioms, regardless of whether the axioms are true. So a perfectly valid and logical proof may have nothing to do with reality if one of the axioms is false. Part of the beauty of mathematics is seeing the strange and mysterious places that apparently simple and innocuous axioms may lead.
For example, students learn Euclidean geometry in high school. In the nineteenth century, mathematicians began exploring different geometries by changing one of Euclid’s axioms, the parallel postulate. A form of non-Euclidean geometry was developed in which the angles in a triangle need not add up to 180 degrees, as they do in Euclidean geometry.
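On the surface of a sphere, one concrete model of a non-Euclidean geometry, the deviation from 180 degrees is not arbitrary: Girard’s theorem ties it to the triangle’s area. A sketch, using standard notation not in the original:

```latex
% Girard's theorem: for a triangle on a sphere of radius R,
% with interior angles \alpha, \beta, \gamma and area A,
\alpha + \beta + \gamma = \pi + \frac{A}{R^2}
% The angle sum exceeds \pi radians (180 degrees) by exactly the
% triangle's area divided by the squared radius; as the triangle
% shrinks, the sum approaches the Euclidean 180 degrees.
```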
Is axiom changing merely a useless game? Even if it is, mathematicians can justify it on aesthetic and artistic grounds when the subsequent reasoning is deep, innovative, and creative.
This particular story has a happy ending even for the most practical individual. Einstein developed his general theory of relativity by making use of the theoretical mathematics of non-Euclidean geometry, and applied it to what we now understand to be a non-Euclidean, four-dimensional universe consisting of three-dimensional space and one-dimensional time. Euclidean geometry, for most practical purposes, still works just fine here on planet Earth.
The axioms in monotheistic religions usually include these attributes for their God: omniscient, omnipotent, omnibenevolent, and infinite. However, religious apologists who want to avoid contradictory axioms have difficulty trying to justify an all-powerful, all-knowing, and all-loving God who allows so much needless suffering.
As a youngster with an Orthodox Jewish background and an interest in mathematics, the idea of an infinite God with infinite power who lived an infinite amount of time fascinated me. I felt that studying “infinity” would help me to understand God. Later I learned that infinity is a theoretical construct created by humans, and that the number “infinity” does not exist in reality. Infinity, like gods, is not sensible (known through the senses). Therefore, I reasoned, if humans created infinity, perhaps humans created God and gave him infinite attributes.
Infinity is a useful concept to help solve math problems, so perhaps an infinite God is a useful concept that helps some believers deal with human problems. Voltaire said, “If God did not exist, He would have to be invented.” Unfortunately, we have seen for centuries all the troubles in the world that were caused by people who believed in invented gods.
Mathematicians, unlike most theologians, recognize that our axioms are just made up, but believers assume their god is real and infinite because a finite god would be limited and could not accomplish all those things he allegedly does. However, we now know mathematically that there can’t be a largest infinity. There are many types of infinities, just as people believe in many types of gods. In fact, there are infinitely many infinities. So any infinite god could theoretically be replaced by a more powerful infinite god.
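The fact that there is no largest infinity is Cantor’s theorem. A sketch of the standard diagonal argument (the notation is mine, not in the original):

```latex
% Cantor's theorem: for any set S, |S| < |\mathcal{P}(S)|.
% Suppose some f : S \to \mathcal{P}(S) were onto, and let
D = \{\, x \in S : x \notin f(x) \,\}.
% If D = f(d) for some d \in S, then
d \in D \iff d \notin f(d) = D,
% a contradiction. So no f is onto, and the power set is strictly
% larger. Iterating S, \mathcal{P}(S), \mathcal{P}(\mathcal{P}(S)),
% ... yields infinitely many distinct infinities.
```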
I’m happy that we can freely discuss views on infinity without meeting the same fate as Giordano Bruno in 1600. He taught that the universe was infinite, with an infinite number of worlds like ours. It was considered heretical for finite man to discover the nature of the infinite, which was so clearly allied with the nature of God. Poor Bruno was burned at the stake, a victim of the Inquisition.
Since the standard God axioms seem contradictory, is there a way to believe in a god with all his alleged attributes when there is no objective evidence that such a god could exist? Yes, but that requires an infinite amount of faith.