Quantum Cosmology, M-theory and the Anthropic Principle

 
[Figure: Orbits of the Lorentz group acting on momentum space in the construction of the bicrossproduct model, in units of \lambda^{-1}. The mass-shell hyperboloids are `squashed' into a cylinder.]

This talk will be based on work with Neil Turok and Harvey Reall. I will describe what I see as the framework for quantum cosmology, on the basis of M theory. I shall adopt the no boundary proposal, and shall argue that the Anthropic Principle is essential if one is to pick out a solution to represent our universe from the whole zoo of solutions allowed by M theory.

Cosmology used to be regarded as a pseudo-science, an area where wild speculation was unconstrained by any reliable observations. We now have lots and lots of observational data, and a generally agreed picture of how the universe is evolving. But cosmology is still not a proper science, in the sense that, as usually practiced, it has no predictive power. Our observations tell us the present state of the universe, and we can run the equations backward to calculate what the universe was like at earlier times. But all that tells us is that the universe is as it is now because it was as it was then. To go further, and be a real science, cosmology would have to predict how the universe should be. We could then test its predictions against observation, like in any other science. The task of making predictions in cosmology is made more difficult by the singularity theorems that Roger Penrose and I proved. These showed that if General Relativity were correct, the universe would have begun with a singularity. Of course, we would expect classical General Relativity to break down near a singularity, where quantum gravitational effects have to be taken into account. So what the singularity theorems are really telling us is that the universe had a quantum origin, and that we need a theory of quantum cosmology if we are to predict the present state of the universe.


A theory of quantum cosmology has three aspects. The first is the local theory that the fields in space-time obey. The second is the boundary conditions for the fields. And I shall argue that the Anthropic Principle is an essential third element. As far as the local theory is concerned, the best, and indeed the only consistent, way we know to describe gravitational forces is curved space-time. And the theory has to incorporate supersymmetry, because otherwise the uncancelled vacuum energies of all the modes would curl space-time into a tiny ball. These two requirements seemed to point to supergravity theories, at least until 1985. But then the fashion changed suddenly. People declared that supergravity was only a low energy effective theory, because the higher loops probably diverged, though no one was brave, or foolhardy, enough to calculate an eight-loop diagram. Instead, the fundamental theory was claimed to be superstrings, which were thought to be finite to all loops. But it was discovered that strings were just one member of a wider class of extended objects, called p-branes. It seems natural to adopt the principle of p-brane democracy: all p-branes are created equal. Yet for p greater than one, the quantum theory of p-branes diverges at higher loops.

 
[Figure: A schematic illustration of the relationship between M-theory, the five superstring theories, and eleven-dimensional supergravity. The shaded region represents the family of different physical scenarios possible in M-theory; the six theories arise as special limiting cases.]


I think we should interpret these loop divergences not as a breakdown of the supergravity theories, but as a breakdown of naive perturbation theory. In gauge theories, we know that perturbation theory breaks down at strong coupling. In quantum gravity, the role of the gauge coupling is played by the energy of a particle. In a quantum loop, one integrates over the energy-momentum of the particles in the loop. So one would expect perturbation theory to break down at high energies.

In gauge theories, one can often use duality to relate a strongly coupled theory, where perturbation theory is bad, to a weakly coupled one, in which it is good. The situation seems to be similar in gravity, with the relation between ultraviolet and infrared cut-offs in the anti-de Sitter/conformal field theory correspondence. I shall therefore not worry about the higher loop divergences, and use eleven-dimensional supergravity as the local description of the universe. This also goes under the name of M theory, for those that rubbished supergravity in the 80s and don't want to admit it was basically correct. In fact, as I shall show, it seems the origin of the universe is in a regime in which first order perturbation theory is a good approximation.

The second pillar of quantum cosmology is the boundary conditions for the local theory. There are three candidates: the pre big bang scenario, the tunneling hypothesis, and the no boundary proposal.

 
[Figure: The six-dimensional (2,0) theory has been used to understand results from the mathematical theory of knots.]


The pre big bang scenario claims that the boundary condition is some vacuum state in the infinite past. But if this vacuum state develops into the universe we have now, it must be unstable. And if it is unstable, it wouldn't be a vacuum state, and it wouldn't have lasted an infinite time before becoming unstable.

The quantum-tunneling hypothesis is not actually a boundary condition on the space-time fields, but on the Wheeler-DeWitt equation. However, the Wheeler-DeWitt equation acts on the infinite dimensional space of all fields on a hypersurface, and is not well defined. Also, the 3+1, or 10+1, split is putting apart that which God, or Einstein, has joined together. In my opinion, therefore, neither the pre big bang scenario nor the quantum-tunneling hypothesis is viable.
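To see why this is delicate, recall the schematic form of the Wheeler-DeWitt equation. The expression below is only a sketch; the operator ordering, the DeWitt metric G_{ijkl} on superspace, and the matter terms all depend on conventions:

\[
\left( -\,G_{ijkl}\,\frac{\delta^2}{\delta h_{ij}\,\delta h_{kl}} \;-\; \sqrt{h}\,\left({}^{(3)}R - 2\Lambda\right) \;+\; \hat{\mathcal{H}}_{\rm matter} \right) \Psi[h_{ij},\phi] \;=\; 0 .
\]

The wave function \Psi is a functional on the infinite dimensional space of three-metrics h_{ij} and matter fields \phi on a hypersurface, which is why imposing a boundary condition on it is not a well defined procedure.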

 
[Figure: An example of compactification: at large distances, a two-dimensional surface with one circular dimension looks one-dimensional.]


To determine what happens in the universe, we need to specify the boundary conditions on the field configurations that are summed over in the path integral. One natural choice would be metrics that are asymptotically Euclidean, or asymptotically anti-de Sitter. These would be the relevant boundary conditions for scattering calculations, where one sends particles in from infinity, and measures what comes back out. However, they are not the appropriate boundary conditions for cosmology.

We have no reason to believe the universe is asymptotically Euclidean, or anti-de Sitter. Even if it were, we are not concerned about measurements at infinity, but in a finite region in the interior. For such measurements, there will be a contribution from metrics that are compact, without boundary. The action of a compact metric is given by integrating the Lagrangian. Thus its contribution to the path integral is well defined. By contrast, the action of a non-compact or singular metric involves a surface term at infinity, or at the singularity, and one can add an arbitrary quantity to this surface term. It therefore seems more natural to adopt what Jim Hartle and I called the no boundary proposal: the quantum state of the universe is defined by a Euclidean path integral over compact metrics. In other words, the boundary condition of the universe is that it has no boundary.
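In symbols, the no boundary wave function for a three-geometry h_{ij} and matter configuration \phi_0 on a surface \Sigma can be written schematically as

\[
\Psi[h_{ij},\phi_0] \;=\; \int \mathcal{D}g\,\mathcal{D}\phi\;\; e^{-S_E[g,\phi]},
\]

where S_E is the Euclidean action and the integral is over compact four-geometries g, with matter fields \phi, whose only boundary is \Sigma, on which they induce h_{ij} and \phi_0. The measure and gauge fixing are left implicit in this sketch.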

There are compact Ricci-flat metrics of any dimension, many with high dimensional moduli spaces. Thus eleven-dimensional supergravity, or M theory, admits a very large number of solutions and compactifications. There may be some principle that we haven't yet thought of that restricts the possible models to a small subclass, but it seems unlikely. Thus I believe that we have to invoke the Anthropic Principle. Many physicists dislike the Anthropic Principle. They feel it is messy and vague, it can be used to explain almost anything, and it has little predictive power. I sympathize with these feelings, but the Anthropic Principle seems essential in quantum cosmology. Otherwise, why should we live in a four dimensional world, and not eleven, or some other number of dimensions? The anthropic answer is that two spatial dimensions are not enough for complicated structures, like intelligent beings. On the other hand, four or more spatial dimensions would mean that gravitational and electric forces would fall off faster than the inverse square law. In this situation, planets would not have stable orbits around their star, nor electrons stable orbits around the nucleus of an atom. Thus intelligent life, at least as we know it, could exist only in four dimensions. I very much doubt we will find a non anthropic explanation.

The Anthropic Principle is usually said to have weak and strong versions. According to the strong Anthropic Principle, there are millions of different universes, each with different values of the physical constants. Only those universes with suitable physical constants will contain intelligent life. With the weak Anthropic Principle, there is only a single universe, but the effective couplings are supposed to vary with position, and intelligent life occurs only in those regions in which the couplings have the right values. However, quantum cosmology and the no boundary proposal remove the distinction between the weak and strong Anthropic Principles. The different physical constants are just different moduli of the internal space, in the compactification of M theory, or eleven-dimensional supergravity. All possible moduli will occur in the path integral over compact metrics. By contrast, if the path integral were over non-compact metrics, one would have to specify the values of the moduli at infinity. But why should the moduli at infinity have those particular values, like four uncompactified dimensions, that allow intelligent life? In fact, the Anthropic Principle really requires the no boundary proposal, and vice versa. One can make the Anthropic Principle precise by using Bayesian statistics.

 
[Figure: The fundamental objects of string theory are open and closed strings.]


One takes the a-priori probability of a class of histories to be e to minus the Euclidean action, given by the no boundary proposal. One then weights this a-priori probability with the probability that the class of histories contains intelligent life. As physicists, we don't want to be drawn into the fine details of chemistry and biology, but we can reckon certain features as essential prerequisites of life as we know it. Among these are the existence of galaxies and stars, and physical constants near what we observe. There may be some other region of moduli space that allows some different form of intelligent life, but it is likely to be an isolated island. I shall therefore ignore this possibility, and just weight the a-priori probability with the probability to contain galaxies.
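Spelled out as a formula, this Bayesian weighting has the schematic form

\[
P(H \mid \text{life}) \;\propto\; e^{-S_E[H]} \,\times\, P(\text{galaxies} \mid H),
\]

where H labels a class of histories, e^{-S_E[H]} is the no boundary a-priori probability, and the second factor is the anthropic weighting, here approximated by the probability that the histories contain galaxies.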

The simplest compact metric that could represent a four dimensional universe would be the product of a four-sphere with a compact internal space. But the world we live in has a metric with Lorentzian signature, rather than a positive definite Euclidean one. So one has to analytically continue the four-sphere metric to complex values of the coordinates.

There are several ways of doing this.

One can analytically continue the coordinate sigma as sigma equals its value at the equator, plus i t. One obtains a Lorentzian metric, which is a closed Friedmann solution with a scale factor that goes like cosh Ht. So this is a closed universe that collapses to a minimum size, and then expands exponentially again.

However, one can analytically continue the four-sphere in another way. Define t = i sigma, and chi = i psi. This gives an open Friedmann universe, with a scale factor like sinh Ht.
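Both continuations can be made explicit. The following is a sketch, with a particular choice of coordinates and the radius of the four-sphere set to 1/H. Start from the round metric

\[
ds^2 \;=\; d\sigma^2 + H^{-2}\sin^2(H\sigma)\,\left(d\psi^2 + \sin^2\!\psi\; d\Omega_2^2\right).
\]

Continuing sigma past the equator, \sigma = \frac{\pi}{2H} + i t, gives the closed form,

\[
ds^2 \;=\; -dt^2 + H^{-2}\cosh^2(Ht)\; d\Omega_3^2,
\]

while continuing t = i\sigma and \chi = i\psi gives the open form,

\[
ds^2 \;=\; -dt^2 + H^{-2}\sinh^2(Ht)\,\left(d\chi^2 + \sinh^2\!\chi\; d\Omega_2^2\right),
\]

whose constant-t surfaces are infinite hyperboloids.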

Thus one can get an apparently spatially infinite universe from the no boundary proposal. The reason is that one is using as a time coordinate the hyperboloids of constant distance inside the light cone of a point in de Sitter space. The point itself, and its light cone, are the big bang of the Friedmann model, where the scale factor goes to zero. But they are not singular. Instead, the spacetime continues through the light cone to a region beyond. It is this region that deserves the name, the pre big bang scenario, rather than the misguided model that commonly bears that title.

If the Euclidean four-sphere were perfectly round, both the closed and open analytical continuations would inflate forever. This would mean they would never form galaxies. A perfectly round four-sphere has a lower action, and hence a higher a-priori probability, than any other four metric of the same volume. However, one has to weight this probability with the probability of intelligent life, which is zero. Thus we can forget about round four-spheres. On the other hand, if the four-sphere is not perfectly round, the analytical continuation will start out expanding exponentially, but it can change over later to radiation or matter domination, and can become very large and flat. This provides a mechanism whereby all eleven dimensions can have similar curvatures in the compact Euclidean metric, but four dimensions can be much flatter than the other seven in the Lorentzian analytical continuation. But the mechanism doesn't seem specific to four large dimensions. So we will still need the Anthropic Principle to explain why the world is four-dimensional.

In the semiclassical approximation, which turns out to be very good, the dominant contribution comes from metrics near solutions of the Euclidean field equations. So we need to study deformed four-spheres in the effective theory obtained by dimensional reduction of eleven dimensional supergravity to four dimensions. These Kaluza-Klein theories contain various scalar fields that come from the three index field and the moduli of the internal space. For simplicity, I will describe only the single scalar field case.

The scalar field, phi, will have a potential, V of phi. In regions where the gradients of phi are small, the energy momentum tensor will act like a cosmological constant, Lambda = 8 pi G V, where G is Newton's constant in four dimensions. Thus it will curve the Euclidean metric like a four-sphere. However, if the field phi is not at a stationary point of V, it cannot have zero gradient everywhere. This means that the solution cannot have O(5) symmetry, like the round four-sphere. The most it can have is O(4) symmetry. In other words, the solution is a deformed four-sphere.
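For a slowly varying field, the corresponding Euclidean solution is approximately a round four-sphere whose radius 1/H is fixed by the potential. In the conventions used here,

\[
H^2 \;=\; \frac{\Lambda}{3} \;=\; \frac{8\pi G\, V(\phi)}{3},
\]

so a larger value of V at the starting point gives a smaller, more highly curved instanton.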

One can write the metric of an O(4) instanton in terms of a function, b of sigma. Here b is the radius of a three-sphere of constant distance, sigma, from the north pole of the instanton. If the instanton were a perfectly round four-sphere, b would be a sine function of sigma. It would have one zero at the north pole, and a second at the south pole, which would also be a regular point of the geometry. However, if the scalar field at the north pole is not at a stationary point of the potential, it will vary over the four-sphere. If the potential is carefully adjusted, and has a false vacuum local minimum, it is possible to obtain a solution that is non-singular over the whole four-sphere. This is known as the Coleman-De Luccia instanton. However, for general potentials without a false vacuum, the behavior is different. The scalar field will be almost constant over most of the four-sphere, but will diverge near the south pole. This behavior is independent of the precise shape of the potential, and holds for any polynomial potential, and for any exponential potential with an exponent, a, less than 2. The scale factor, b, will go to zero at the south pole, like the one-third power of the distance. This means the south pole is actually a singularity of the four dimensional geometry. However, it is a very mild singularity, with a finite value of the trace K surface term on a boundary around the singularity at the south pole. This means the actions of perturbations of the four dimensional geometry are well defined, despite the singularity. One can therefore calculate the fluctuations in the microwave background, as I shall describe later.
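In equations, the O(4) ansatz and the two behaviors described above are, as a sketch,

\[
ds^2 \;=\; d\sigma^2 + b(\sigma)^2\, d\Omega_3^2, \qquad b(\sigma) = H^{-1}\sin(H\sigma)\ \ \text{(round four-sphere)},
\]

while for the singular instanton, near the south pole at \sigma = \sigma_s,

\[
b(\sigma) \;\sim\; C\,(\sigma_s - \sigma)^{1/3},
\]

with C a constant, and the scalar field phi diverging logarithmically as \sigma \to \sigma_s.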

The deep reason behind this good behavior of the singularity was first seen by Garriga. He pointed out that if one dimensionally reduced five dimensional Euclidean Schwarzschild along the tau direction, one would get a four-dimensional geometry, and a scalar field. These are singular at the horizon, in the same manner as at the south pole of the instanton. In other words, the singularity at the south pole can be just an artifact of dimensional reduction, and the higher dimensional space can be non-singular. This is true quite generally. The scale factor, b, will go like the one-third power of the distance when the internal space collapses to zero size in one direction.
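Schematically, the reduction uses a Kaluza-Klein ansatz of the form

\[
ds_5^2 \;=\; e^{2\alpha\phi}\, ds_4^2 \;+\; e^{2\beta\phi}\, d\tau^2,
\]

where \alpha and \beta are constants chosen so that the four-dimensional action takes the canonical Einstein plus scalar form; the precise values depend on conventions. At the Euclidean Schwarzschild horizon, the tau circle shrinks smoothly to zero size, so e^{2\beta\phi} \to 0: the scalar phi diverges logarithmically, and the four-dimensional scale factor vanishes like the one-third power of the proper distance, exactly as at the south pole of the instanton, even though the five-dimensional geometry is perfectly regular there.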

When one analytically continues the deformed sphere to a Lorentzian metric, one obtains an open universe, which is inflating initially.

One can think of this as a bubble in a closed de Sitter like universe. In this way, it is similar to the single bubble inflationary universes that one obtains from Coleman-De Luccia instantons. The difference is that the Coleman-De Luccia instantons required carefully adjusted potentials, with false vacuum local minima, while the singular Hawking-Turok instanton will work for any reasonable potential. The price one pays for a general potential is a singularity at the south pole. In the analytically continued Lorentzian space-time, this singularity would be time like, and naked. One might think that anything could come out of this naked singularity, and propagate through the big bang light cone into the open inflating region. Thus one would not be able to predict what would happen. However, as I already said, the singularity at the south pole of the four-sphere is so mild that the actions of the instanton, and of perturbations around it, are well defined. This means one can determine the relative probabilities of the instanton, and of perturbations around it. The action of the instanton itself is negative, but the effect of perturbations around the instanton is to increase the action, that is, to make the action less negative. According to the no boundary proposal, the probability of a field configuration is e to minus its action. Thus perturbations around the instanton have a lower probability than the unperturbed background. This means that quantum fluctuations are suppressed, the bigger the fluctuation, as one would hope. This is not the case with some versions of the tunneling boundary condition.
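In formulas: expanding the Euclidean action about the instanton background, S_E = S_0 + S_2[\delta g, \delta\phi] + \dots, the no boundary weighting gives, schematically,

\[
P[\delta g, \delta\phi] \;\propto\; e^{-S_0 \,-\, S_2[\delta g,\,\delta\phi]},
\]

and since the quadratic action S_2 is positive for the allowed perturbation modes, larger fluctuations are exponentially suppressed relative to the unperturbed background.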

How well do these singular instantons account for the universe we live in? The hot big bang model seems to describe the universe very well, but it leaves a number of features unexplained.

 
[Figure: Properties of n+m-dimensional spacetimes.]


First is the isotropy: why are different regions of the microwave sky at very nearly the same temperature, if those regions have not communicated in the past? Second, despite this overall isotropy, why are there fluctuations of order one part in ten to the five, with a fairly flat spectrum? Third, why is the density of matter still so near the critical value, when any departure would grow rapidly with time? Fourth, why is the vacuum energy, or effective cosmological constant, so small, when symmetry breaking might lead one to expect a value ten to the 80 times higher? In fact, the present matter and vacuum energy densities can be regarded as two axes in a plane of possibilities. For some purposes, it is better to deal with the linear combinations: matter plus vacuum energy, which is related to the curvature of space, and matter minus twice vacuum energy, which gives the deceleration of the universe.
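In standard notation, with Omega_m and Omega_Lambda the matter and vacuum density parameters, these two combinations are, from the usual Friedmann relations,

\[
\Omega_m + \Omega_\Lambda - 1 \;=\; \frac{k}{a^2 H^2}, \qquad q_0 \;=\; \tfrac{1}{2}\,\Omega_m - \Omega_\Lambda \;=\; \tfrac{1}{2}\left(\Omega_m - 2\,\Omega_\Lambda\right),
\]

the first measuring the spatial curvature, and the second, the deceleration parameter q_0, measuring the deceleration of the expansion.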

Inflation was supposed to solve the problems of the hot big bang model. It does a good job with problem one, the isotropy of the universe. If the inflation continues for long enough, the universe would now be spatially flat, which would imply that the sum of the matter and vacuum energies had the critical value. But inflation by itself places no limits on the other linear combination of matter and vacuum energies, and does not give an answer to problem two, the amplitude of the fluctuations. These have to be fed in, as fine tunings of the scalar potential, V. Also, without a theory of initial conditions, it is not clear why the universe should start out inflating in the first place.

The instantons I have described predict that the universe starts out in an inflating, de Sitter like state. Thus they solve the first problem, the fact that the universe is isotropic. However, there are difficulties with the other three problems. According to the no boundary proposal, the a-priori probability of an instanton is e to minus the Euclidean action. But if the Ricci scalar is positive, as is likely for a compact instanton with an isometry group, the Euclidean action will be negative.

The larger the instanton, the more negative will be the action, and so the higher the a-priori probability. Thus the no boundary proposal favors large instantons. In a way, this is a good thing, because it means that the instantons are likely to be in the regime where the semiclassical approximation is good. However, a larger instanton means starting at the north pole with a lower value of the scalar potential, V. If the form of V is given, this in turn means a shorter period of inflation. Thus the universe may not achieve the number of e-foldings needed to ensure that omega matter plus omega lambda is near one now. In the case of the open Lorentzian analytical continuation considered here, the no boundary a-priori probabilities would be heavily weighted towards omega matter plus omega lambda equals zero. Obviously, in such an empty universe, galaxies would not form, and intelligent life would not develop. So one has to invoke the anthropic principle.

If one is going to have to appeal to the anthropic principle, one may as well use it also for the other fine tuning problems of the hot big bang. These are the amplitude of the fluctuations, and the fact that the vacuum energy now is incredibly near zero. The amplitude of the scalar perturbations depends on both the potential and its derivative. But in most potentials, the scalar perturbations are of the same form as the tensor perturbations, but larger by a factor of about ten. For simplicity, I shall consider just the tensor perturbations. They arise from quantum fluctuations of the metric, which freeze in amplitude when their co-moving wavelength leaves the horizon during inflation.

The amplitude of the tensor perturbation will thus be roughly one over the horizon size, in Planck units. Longer co-moving wavelengths leave the horizon first during inflation. Thus the spectrum of the tensor perturbations, at the time they re-enter the horizon, will slowly increase with wavelength, up to a maximum of one over the size of the instanton.
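Roughly, in Planck units (m_{Pl} = 1), a tensor mode that leaves the horizon when the Hubble rate is H freezes with amplitude

\[
\delta h \;\sim\; \frac{H}{2\pi},
\]

which is of order the inverse horizon size at horizon exit; numerical factors like the 2\pi depend on conventions. Since H slowly decreases during inflation, longer wavelengths carry larger amplitudes, up to the stated maximum of order the inverse size of the instanton.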

The time at which the maximum amplitude re-enters the horizon is also the time at which omega begins to drop below one. One has two competing effects: the a-priori probability from the no boundary proposal, which wants to make the instantons large, and the probability of the formation of galaxies, which requires that both omega, and the amplitude of the fluctuations, not be too small. This would give a sharp peak in the probability distribution, at an omega of about ten to the minus three. The probability for the tensor perturbations will peak at order ten to the minus eight. Both these values are much less than what is observed. So what went wrong? We haven't yet taken into account the anthropic requirement that the cosmological constant is very small now. Eleven dimensional supergravity contains a three-form gauge field, with a four-form field strength. When reduced to four dimensions, this acts as a cosmological constant. For real components in the Lorentzian four-dimensional space, this cosmological constant is negative. Thus it can cancel the positive cosmological constant that arises from supersymmetry breaking. Supersymmetry breaking is an anthropic requirement. One could not build intelligent beings from massless particles. They would fly apart.
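Schematically, and with normalization-dependent constants left implicit, a four-form field strength proportional to the four-dimensional volume form contributes negatively to the effective cosmological constant:

\[
F_{\mu\nu\rho\sigma} \;=\; c\,\epsilon_{\mu\nu\rho\sigma} \qquad\Longrightarrow\qquad \Lambda_{\rm eff} \;=\; \Lambda_{\rm SUSY\ breaking} \;-\; \kappa\, c^2,
\]

where c is a real constant, \kappa is a positive constant depending on conventions, and \Lambda_{\rm SUSY\ breaking} is the positive contribution from supersymmetry breaking. The sign of the F^2 term in Lorentzian signature is what allows the cancellation described above.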

Unless the positive contribution from symmetry breaking cancels almost exactly with the negative four-form contribution, galaxies wouldn't form, and again, intelligent life wouldn't develop. I very much doubt we will find a non anthropic explanation for the cosmological constant.

In the eleven dimensional geometry, the integral of the four-form over any four cycle, or of its dual over any seven cycle, has to be an integer. This means that the four-form is quantized, and cannot be adjusted to cancel the symmetry breaking exactly. In fact, for reasonable sizes of the internal dimensions, the quantum steps in the cosmological constant would be much larger than the observational limits. At first, I thought this was a setback for the idea that there was an anthropically controlled cancellation of the cosmological constant. But then I realized that it was positively in favor. The fact that we exist shows that there must be a solution to the anthropic constraints.
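The quantization conditions have the schematic form

\[
\int_{C_4} F \;=\; n\,(2\pi\,\ell_{11})^{3}, \qquad \int_{C_7} \star F \;=\; m\,(2\pi\,\ell_{11})^{6}, \qquad n, m \in \mathbb{Z},
\]

where C_4 and C_7 are four- and seven-cycles of the geometry, \ell_{11} is the eleven-dimensional Planck length, and the normalizations shown are assumptions for illustration. The constant c of the four-form, and hence its negative contribution to the cosmological constant, can therefore take only discrete values, with a step size set by the size of the internal space.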

But the fact that the quantum steps in the cosmological constant are so large means that this solution is probably unique. This helps with the problem of low omega I described earlier. If there were several discrete solutions, or a continuous family of them, the strong dependence of the Euclidean action on the size of the instanton would bias the probability to the lowest omega and fluctuation amplitude possible. This would give a single galaxy in an otherwise empty universe, not the billions we observe. But if there is only one instanton in the anthropically allowed range, the biasing towards large instantons has no effect. Thus omega matter and omega lambda could be somewhere in the anthropically allowed region, though it would be below the omega matter plus omega lambda = 1 line, if the universe is one of these open analytical continuations. This is consistent with the observations.

The red elliptical region shows the three sigma limits of the supernova observations. The blue region is from clustering observations, and the purple from the Doppler peak in the microwave background. They seem to have a common intersection, on or below the omega total = 1 line.

Assuming that one can find a model that predicts a reasonable omega, how can we test it by observation? The best way is by observing the spectrum of fluctuations in the microwave background. This is a very clean measurement of the quantum fluctuations about the initial instanton. However, there is an important difference between the non-singular Coleman-De Luccia instantons and the singular instantons I have described. As I said, quantum fluctuations around the instanton are well defined, despite the singularity. Perturbations of the Euclidean instanton have finite action if and only if they obey a Dirichlet boundary condition at the singularity. Perturbation modes that don't obey this boundary condition will have infinite action, and will be suppressed. The Dirichlet boundary condition also arises if the singularity is resolved in higher dimensions.

When one analytically continues to Lorentzian space-time, the Dirichlet boundary condition implies that perturbations reflect at the timelike singularity.

This has an effect on the two-point correlation function of the perturbations, but it seems to be quite small. The present observations of the microwave fluctuations are certainly not sensitive enough to detect this effect. But it may be possible with the new observations that will be coming in from the MAP satellite in two thousand and one, and the Planck satellite in two thousand and six.


Thus the no boundary proposal and the pea instanton are real science: they can be falsified by observation.