Tuesday, February 23, 2010

What Does a Singularity Look Like?


Cosmology: The Higgs Singularity

The quantum uncertainty of the Higgs vacuum fluctuation singularity that exploded into the big bang over thirteen billion years ago was certainly indifferent to the birth of a universe with life on Earth, which now suffers under the endless burden of asking, “Why?”

Why symmetry-breaking singularities? Why an Uncertainty Principle? Why, why, why…

Some answers appear in tantalizing dribs and drabs from inspired ideas and ingenious experiments. They produce knowledge in the form of astrophysics, the branch of astronomy that explores the physics of the universe: galaxies, stars, planets, exoplanets, the interstellar medium, and the cosmic microwave background. Or cosmology, the sub-branch of astrophysics that studies the origin and large-scale properties of the universe, including the big bang. What imperfect answers do these endeavors offer? Let’s look.

In 1976, particle physics arrived at the Standard Model, which represents the three quantum forces (electromagnetic, weak, and strong) as the gauge group U(1) x SU(2) x SU(3) with exceptional accuracy. Each force in the Standard Model is described by a gauge theory, with complex charges that mutually interact in a highly symmetric way.

The development of Grand Unified Theories (GUTs) was intended to unify the electromagnetic, weak, and strong interactions of the Standard Model into a single fully unified interaction. While this still left out gravity, it was expected to be an excellent approximation to nature. GUTs gave a specific reason for the charge symmetry of electrons and protons. Supersymmetry also produced an intersection point where all three coupling constants merge, at about 10^16 GeV. GUTs predicted two early-universe phase transitions. The first, a high-temperature transition, breaks gravity away from the GUT forces and generates a large amount of supercooling, delaying the second, GUT phase transition, which then occurs with monopole production suppressed. Supercooling means the transition takes place at a temperature well below the normal transition temperature; water, for example, can be supercooled to 20 degrees below freezing before it turns to ice. With the GUT transition postponed by supercooling, the correct (very low) monopole production rate can be expected. In addition, supercooling produces a false vacuum whose strong gravitational repulsion drives an exponential inflationary expansion of the universe; in effect, the false vacuum creates a gravitational effect identical to Einstein’s cosmological constant. The expansion doubling time is 10^-37 seconds, so 100 doublings (about 10^30 times the original size) would take only 10^-35 seconds.
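As a quick sanity check on those last numbers, here is a minimal Python sketch (the doubling time and the number of doublings are the figures quoted above):

# Check the inflation arithmetic quoted above.
doubling_time = 1e-37          # seconds per doubling (figure from the text)
n_doublings = 100

expansion_factor = 2 ** n_doublings      # growth in linear size
elapsed = n_doublings * doubling_time    # total elapsed time

print(f"expansion factor ~ {expansion_factor:.2e}")   # ~1.27e+30, i.e. ~10^30
print(f"elapsed time     ~ {elapsed:.0e} s")          # 1e-35 seconds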

The supercooled phase transition is of first order, and it is what causes the inflationary expansion of the universe.


In July 2012, the Higgs boson was discovered.

The symmetry-breaking singularity that split a single unified force into the known forces occurred in stages, starting at 10^16 GeV and 10^29 kelvin, about 10^-39 seconds after the big bang. The original single coupling constant became four coupling constants that began to diverge in value. The matter-antimatter ratio was slightly imbalanced, and matter became slightly dominant because cooling shut the process off before baryogenesis reached equilibrium (leaving the roughly 10^78 baryons now in the universe). During the big bang, the inflationary expansion of space proceeded faster than the speed of light.

General Relativity is an extremely accurate theory of gravity, but it is classical and does not include quantum effects. The inability to reconcile the two theories stems from the appearance of infinite expressions that cannot be renormalized.



Radiation: massless or nearly massless particles that move at the speed of light, including photons (light) and neutrinos. Their emissions are examined across all parts of the electromagnetic spectrum.

Mass is E/c^2; it arises when a quark (or other particle) is a disturbance in a gluon field.

Photons are a moving disturbance within an electromagnetic field.

Baryonic matter: ordinary matter composed primarily of protons, neutrons, and electrons.

Dark energy: a property of the vacuum itself, characterized by negative pressure (a repelling force) that causes the expansion of the universe to accelerate.

Dark matter: exotic non-baryonic matter that interacts only weakly with ordinary matter.

The Big Bang model employs two critical ideas: General Relativity and the Cosmological Principle. With matter distributed uniformly, the properties of space-time can be computed using General Relativity. The big bang was a simultaneous expansion of space everywhere in the universe rather than an explosion from a single point.

Inflation was a phenomenon similar to a phase transition after the big bang, involving supercooling and super-expansion, with symmetry breaking splitting the forces at about 10^-37 seconds after the big bang. Inflation shut off the production of monopoles and explains the flatness problem (the observed preference for Euclidean geometry), with critical density omega = 1.0.
The phase transition was a first-order phase transition (like boiling water, with supercooling), the second phase transition to occur after the Big Bang, lasting from 10^-37 to 10^-35 seconds. It produced a false vacuum whose negative pressure acts as a gravitational repulsion, behaving like a cosmological constant and increasing the size of the universe by a factor of about 10^30.



First-Order Phase Transition Discontinuities in the Inflationary Universe

The first series of applications of elementary catastrophe theory deals with thermodynamic phase transitions, such as the Ginzburg-Landau second-order phase transition. We relate the critical point of a fluid to the cusp catastrophe, and also relate this to Alan Guth’s inflationary universe [2].

The classical theory of phase transitions is naturally related to elementary catastrophe theory through a general family of potential functions V(x, c) depending on state variables x and control parameters c.

We let the state of the physical system be described by the value x that locally minimizes the potential. The physical system is then reduced to a study of the equilibrium and stability properties of the potential function V(x, c).

At equilibrium the first derivative of the potential is zero, dV/dx = 0, and where the second derivative is greater than zero, d^2V/dx^2 > 0, the equilibrium is locally stable; these conditions identify the critical values of the stable equilibrium branches.

In general the potential function will have only isolated critical points. A phase transition occurs when the point describing the state of the physical system jumps from one critical branch to another.

Phase transitions can occur when the control parameters are varied. The control parameters are assumed to depend on a single time parameter, so they trace out a curve in parameter space. A phase transition occurs when this curve crosses the bifurcation set, on which local minima are created or destroyed. The transition is of nth order if the discontinuity first appears in the nth derivative of the potential along the curve. Phase transitions in nature are usually of zeroth, first, or second order.


ECT

A ‘singularity’ is a point where mathematical models are no longer valid; for example, division by zero is undefined. The theory of singularities examines mathematical manifolds in an abstract space to gain a topological representation of the region near a singularity.

In the 1960s, René Thom proposed a nonlinear mathematical approach to describing singularities called Catastrophe Theory. Thom classified bifurcations based on their potential functions and derivatives. The morphology of the solutions is determined by the values of the potential’s parameters. In the special case of gradient vector fields, a rigorous mathematical result called Elementary Catastrophe Theory (ECT) emerges.

Gradient vector fields are interesting because nearly all trajectories on the behavior surface tend toward a point attractor and the attractor minimizes the potential function V of the system. The parameters determine the locations of the relative minima. A smooth change in the parameters can give rise to a discontinuous jump on the behavior surface.
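To make the attractor picture concrete, here is a minimal Python sketch (the parameter values are illustrative, and the potential is the cusp potential introduced below) that follows the gradient flow dx/dt = -dV/dx until the state settles into a local minimum:

# Gradient flow for V(x) = (x^4)/4 + A*(x^2)/2 + B*x (illustrative values).
A, B = -1.0, 0.1        # two local minima exist for these values

def dVdx(x):
    return x**3 + A * x + B

x, dt = 2.0, 0.01       # initial state and Euler step size
for _ in range(5000):
    x -= dVdx(x) * dt   # step downhill along the gradient

print(f"settled near x = {x:.4f}")   # a stable root of x^3 + A*x + B = 0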

Thom found that under conditions of structural stability there are exactly seven elementary catastrophes when the potential function has no more than four control parameters. The most illustrative is the Cusp Catastrophe, with a potential of:

V(x, A, B) = (x^4)/4 + A(x^2)/2 + Bx.

The Cusp has two control variables, A and B, where x satisfies

dV/dx = 0,

shown in figure 1.

Outside the cusp region there is only one extremal value of x. Inside the cusp there are two different values of x, each giving a local minimum of V(x).
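A minimal Python sketch of this root counting (numpy is assumed; the parameter values are illustrative, and the sign of the standard cubic discriminant quantity 4A^3 + 27B^2 distinguishes the two regions):

import numpy as np

def real_equilibria(A, B):
    # Real roots of dV/dx = x^3 + A*x + B = 0.
    roots = np.roots([1.0, 0.0, A, B])
    return sorted(r.real for r in roots if abs(r.imag) < 1e-9)

# Inside the cusp region (4A^3 + 27B^2 < 0): three equilibria,
# two local minima separated by one local maximum.
print(real_equilibria(-1.0, 0.1))   # ~[-1.047, 0.102, 0.946]

# Outside the cusp region (4A^3 + 27B^2 > 0): a single equilibrium.
print(real_equilibria(1.0, 0.1))    # ~[-0.0997]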

The cusp shape in parameter space (A, B) near the catastrophe point shows the locus of fold bifurcations separating the region with two stable solutions from the region with one.

But the bifurcation curve loops back on itself, giving a second branch where the alternate solution loses stability and jumps back to the original solution. Hysteresis loops can be observed as the system follows one solution and jumps to the other [3].
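This jump-and-return behavior can be demonstrated numerically. The following minimal Python sketch (illustrative values, not from the text) fixes A = -1 inside the cusp region, sweeps B up and then back down, and relaxes to the nearest local minimum at each step:

import numpy as np

A = -1.0   # fixed control parameter, inside the cusp region

def relax(x, B, steps=5000, dt=0.01):
    # Follow the gradient flow dx/dt = -dV/dx to the nearest local minimum.
    for _ in range(steps):
        x -= (x**3 + A * x + B) * dt
    return x

B_sweep = np.concatenate([np.linspace(-0.5, 0.5, 11),   # sweep up...
                          np.linspace(0.5, -0.5, 11)])  # ...then back down
x = relax(0.0, B_sweep[0])    # settle onto the single branch at B = -0.5
for B in B_sweep:
    x = relax(x, B)
    print(f"B = {B:+.2f}  ->  x = {x:+.3f}")

The state jumps branches near B = +0.4 on the way up but near B = -0.4 on the way back (the exact fold lines satisfy 4A^3 + 27B^2 = 0, here B = ±0.385), so the two sweeps disagree over a range of B: a hysteresis loop.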



Figure 1 Cusp Catastrophe

Consider holding B constant and varying A to follow path 1 or 2. In the symmetric case B = 0, a pitchfork bifurcation occurs as A is reduced: one stable solution suddenly splits into two stable solutions and one unstable solution as the physical system passes through the cusp point (0, 0) into A < 0 (spontaneous symmetry breaking). Away from the cusp point, there are no sudden changes.
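Working the B = 0 case out explicitly with the potential above: dV/dx = x^3 + Ax = x(x^2 + A) = 0 gives x = 0 for every A, plus x = ±sqrt(-A) when A < 0. The second derivative d^2V/dx^2 = 3x^2 + A identifies the stable branches: at x = 0 it equals A, so the central solution is stable for A > 0 and unstable for A < 0, while at x = ±sqrt(-A) it equals -2A > 0, so the two new branches are stable. This is the pitchfork just described.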

References [4-6] by Alesso illustrate applications of Elementary Catastrophe Theory.

REFERENCES:

[1] Weinberg, S., The First Three Minutes, Basic Books, New York, NY, 1977.

[2] Guth, A. H., The Inflationary Universe, Basic Books, New York, NY, 1997.

[3] Wikipedia: Catastrophe Theory.

[4] Alesso, H. P., “On the Instabilities of an Externally Loaded Shell,” International Journal of Non-Linear Mechanics, Vol. 17, No. 2, pp. 85-103, 1982.

[5] Alesso, H. P., and Smith, C. F., “On Classifying the Deformation Shape of the Liquid Drop Model,” Il Nuovo Cimento, Vol. 66, pp. 272-282, 1981.

[6] Alesso, H. P., “Elementary Catastrophe Modeling of an End-Loaded Ring in a Rigid Cavity,” Nuclear Engineering and Design, 1978.

Monday, February 22, 2010

Singularity Metrics: Manycore Processors

As the billions of smart devices using single microprocessors are replaced by manycore ‘brains,’ we can expect to reach trillions of smart chips conducting much more efficient parallel processing within just seven years. The Era of Moore’s Law will give way to the Era of Amdahl’s Law [1].

During the Era of Moore’s Law, miniaturized microprocessors produced smaller, faster computers. Manycore systems are now replacing that performance hierarchy with innovative efficiency, thereby redefining the Information Revolution.

Moore’s Law is the empirical observation that the capacity of chips doubles roughly every 18 months. As transistors shrank, the density and complexity of the circuits increased. In 2002, Intel planned on achieving 30-gigahertz chips by today using 10-nanometer technology. But Intel was wrong.
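For reference, the growth implied by an 18-month doubling period works out as follows (a minimal Python sketch of the arithmetic):

# Capacity growth implied by one doubling every 18 months.
months_per_doubling = 18
for years in (3, 6, 10):
    factor = 2 ** (years * 12 / months_per_doubling)
    print(f"{years:2d} years -> x{factor:,.0f}")
# 3 years -> x4, 6 years -> x16, 10 years -> x102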

Chip makers are still at around four gigahertz, and the focus has shifted from seeking greater single-processor speed to exploiting manycore processors.

Manycore processors provide high-density computer processing power with scalability and less heat. Just as the transistor replaced the vacuum tube, manycore systems are now replacing single-microprocessor systems as more efficient, cheaper, and more reliable components.

Conventional wisdom for PCs predicts a doubling of the number of cores on a chip with each new silicon generation. Within a few years there will be 100-core machines, and applications will require new concurrent programs. Windows 7 and Windows Server 2008 can already work with up to 256 logical processors. There is conviction that reaching 1,000 cores on a die with 30 nm technology is possible, and Cisco already has routers with 188 cores built on 130 nm technology (see Figure 1).

The goal is easy-to-write programs that execute efficiently on highly parallel systems with thousands of cores per chip.

This means that growing software complexity will require fundamentally rethinking architecture and shifting the paradigm from Moore’s Law to Amdahl’s Law.

Amdahl’s Law is given by:

Speedup ≤ 1 / (F + (1-F) / N)




Figure 1

Amdahl's law describes how much a program can theoretically be sped up by additional computing resources, based on the proportion of parallelizable and serial components, where F is the fraction of the calculation that must be executed serially, given as [2]:

F = s / (s + p)

where s is the time spent in serial execution and p is the time spent in parallel execution.

Then Amdahl's law says that on a machine with N processors, as N approaches infinity, the maximum speedup converges to

1/F, or (s + p)/s.
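A minimal Python sketch of this behavior (F = 0.05 is an illustrative serial fraction, not a figure from the text):

# Amdahl's law: speedup bound for N processors with serial fraction F.
def amdahl_speedup(F, N):
    return 1.0 / (F + (1.0 - F) / N)

F = 0.05   # illustrative: 5% of the work is inherently serial
for N in (1, 4, 16, 64, 256, 1024):
    print(f"N = {N:5d}  speedup <= {amdahl_speedup(F, N):6.2f}")
print(f"limit as N -> infinity: {1.0 / F:.1f}")   # 1/F = 20

Even with 1,024 cores, the 5% serial fraction caps the speedup just below 20, which is why shrinking F matters more than adding cores.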

What does this mean for the metrics of technology growth?

It means fast just got faster.

REFERENCES:

1. Alesso, H. P., Connections: Patterns of Discovery, John Wiley & Sons Inc., New York, NY, 2008.

2. Goetz, B., et al., Java Concurrency in Practice, Addison-Wesley, 2006.