Coffeeshop Physics
by Jim Pivarski


Physics in a Nutshell

I wrote the articles below for Fermilab Today. Each one touches on a different topic in physics.

May 28, 2015: Magnets for measurements

Magnet systems in modern particle physics experiments are used to analyze particle charge and momentum, yet their fields are strong enough, and cover enough volume, to give a whale an MRI exam.

Broadly speaking, a modern particle physics detector has three main pieces: (1) tracking, which charts the course of charged particles by letting them pass through thin sensors, (2) calorimetry, which measures the energy of charged or neutral particles by making them splat into a wall and (3) a strong magnetic field. Unlike tracking and calorimetry, the magnet doesn’t detect the particles directly — it affects them in revealing ways.

Magnetic fields curve the paths of charged particles, and the direction of curvature depends on whether the particle is positively or negatively charged. Thus, a tracking system with a magnetic field can distinguish between matter and antimatter. In addition, the deflection is larger for slow, low-momentum particles than it is for fast, high-momentum ones. Fast particles zip right through while slow ones loop around, possibly several times.

Both effects were used to discover positrons in 1932. A cloud chamber (tracking system) immersed in a strong magnetic field revealed particles that curved the wrong way to be negatively charged electrons, yet were also too fast to be positively charged protons. The experimenters concluded that they had discovered a new particle, similar to electrons, but positively charged. It turned out to be the first evidence of antimatter.

Today, most particle physics experiments feature a strong magnet. The radius of curvature of each particle’s track precisely determines its momentum. In many experiments, these magnets are stronger than the ones used to conduct MRI scans in hospitals, yet are also large enough to fit a whale inside.
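For the curious, the standard rule of thumb behind that measurement is p ≈ 0.3 B r: momentum in GeV/c, field in teslas, bending radius in meters, for a particle of unit charge. Here is a minimal Python sketch; the 4-tesla field is an assumed round number, close to the CMS solenoid’s.

```python
# Rule of thumb: p [GeV/c] = 0.3 * B [tesla] * r [meter] for unit charge.

def momentum_GeV(field_tesla, radius_m):
    """Momentum of a singly charged particle from its bending radius."""
    return 0.3 * field_tesla * radius_m

# In an assumed 4-tesla field, a track bending with a 1-meter radius
# carries about 1.2 GeV/c of momentum:
print(momentum_GeV(4.0, 1.0))       # ~1.2

# A 100 GeV/c particle in the same field bends with an 83-meter radius,
# nearly straight on the scale of a detector:
print(100.0 / (0.3 * 4.0))          # ~83
```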

Most of these magnets work the same way as a hand-held electromagnet: a DC current circulates in a coiled wire to produce a magnetic field. However, particle physics magnets are often made of superconducting materials to achieve extremely high currents and field strengths. Some magnets, such as the one in CMS, are cylindrical for more precision at right angles to the beamline, while others, such as ATLAS’s outer magnet, are toroidal (doughnut-shaped) for more precision close to the beamline. In some cases, an experiment without a built-in magnet can surreptitiously make use of natural magnetic fields: the Fermi-LAT satellite used the Earth’s magnetic field to distinguish positrons from electrons.

Since the particle momentum that a magnetized tracking system measures is closely related to the particle energy that a calorimeter measures, the two can cross-check each other, be used in combination or reveal the particles that are invisible to tracking alone. Advances in understanding often come from different ways of measuring similar things.

Apr 2, 2015: Observe neutral particles with this one weird trick

A shower produces dozens of particles that could be observed individually (inset figure) or collectively in a calorimeter (bottom).

The previous Physics in a Nutshell introduced tracking, a technique that allows physicists to see the trajectories of individual particles. The biggest limitation of tracking is that only charged particles ionize the medium that forms clouds, bubbles, discharges or digital signals. Neutral particles are invisible to any form of tracking.

Calorimetry, which now complements tracking in most particle physics experiments, takes advantage of a curious effect that was first observed in cloud chambers in the 1930s. Occasionally, a single high-energy particle seemed to split into dozens of low-energy particles. These inexplicable events were called “bursts,” “explosions” or “die Stöße.” Physicists initially thought they could only be explained by a radical revision of the prevailing quantum theory.

As it turns out, these events are due to two well-understood processes, iterated ad nauseam. Electrons and positrons recoil from atoms of matter to produce photons, and photons in matter split to form electron-positron pairs. Each of these steps doubles the total number of particles, turning a single high-energy particle into many low-energy particles.

This cascading process is now known as a shower. The cycle of charged particles creating neutral particles and neutral particles creating charged particles can be started by either type, making it sensitive to any particle that interacts with matter, including neutral ones. Although the shower process is messy, the final particle energies should add up to the original particle’s energy, providing a way to measure the energy of the initial particle — by destroying it.
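As a toy model (nothing like a real shower simulation), suppose that in every radiation length of material each particle splits in two, sharing its energy equally, until the energy per particle falls below a critical energy; the 10 MeV cutoff below is an assumed round number.

```python
# Toy electromagnetic shower: particles double each radiation length
# (bremsstrahlung or pair production) until they fall below a critical energy.

def toy_shower(initial_energy_GeV, critical_energy_GeV=0.01):
    energy, particles, depth = initial_energy_GeV, 1, 0
    while energy / 2 > critical_energy_GeV:
        particles *= 2    # every particle splits in two
        energy /= 2       # each daughter carries half the energy
        depth += 1        # one more radiation length of material
    return particles, depth

print(toy_shower(10.0))   # (512, 9): a 10 GeV particle becomes ~500 particles
# Energy is conserved: 512 particles times ~20 MeV each adds up to 10 GeV,
# which is exactly what a calorimeter exploits.
```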

Modern calorimeters initiate the shower using a heavy material and then measure the energy using ordinary light sensors. To accurately measure the energy of the final photons, this heavy material should also be transparent. Crystals are a common choice, as are lead-infused glass, liquid argon and liquid xenon.

Not all calorimeters are man-made. Neutrinos produce electrons in water or ice, which cascade into showers of electrons, positrons and photons. The IceCube experiment uses a cubic kilometer of Antarctic ice to observe PeV neutrinos — a hundred times more energetic than the LHC’s beams. Cosmic rays form showers in the Earth’s atmosphere, producing about 4 watts of ultraviolet light and billions of particles. The Pierre Auger Observatory uses sky-facing cameras and 3,000 square kilometers of ground-based detectors to capture both and has measured particles that are a million times more energetic than the LHC’s beams.

Mar 19, 2015: Happy trails

This shows a particle identified in a photograph of a bubble chamber (left) and a computer reconstruction of signals from a silicon tracker (right).

Much of the complexity of particle physics experiments can be boiled down to two basic types of detectors: trackers and calorimeters. They each have strengths and weaknesses, and most modern experiments use both. This and the next Physics in a Nutshell are about trackers and calorimeters, to kick off a series about detectors in general.

The first tracker started out as an experiment to study clouds, not particles. In the early 1900s, Charles Wilson built an enclosed sphere of moist air to study cloud formation. Dust particles were known to seed cloud formation — water vapor condenses on the dust to make clouds of tiny droplets. But no matter how clean Wilson made his chamber, clouds still formed.

Moreover, they formed in streaks, especially near radioactive sources. It turned out that subatomic particles were ionizing the air, and droplets condensed along these trails like dew on a spider web.

This cloud chamber was phenomenally useful to particle physicists — finally, they could see what they were doing! It’s much easier to find strange, new particles when you have photos of them acting strangely. In some cases, they were caught in the act of decaying — the kaon was discovered as a V-shaped intersection of two pion tracks, since kaons decay into pairs of pions in flight.

In addition to turning vapor into droplets, ionization trails can cause bubbles to form in a near-boiling liquid. Bubble chambers could be made much larger than cloud chambers, and they produced clear, crisp tracks in photographs. Spark chambers used electric discharges along the ionization trails to collect data digitally. More recently, time projection chambers measure the drift time of ions between the track and a high-voltage plate for more spatial precision, and silicon detectors achieve even higher resolution by collecting ions on microscopic wires printed on silicon microchips. Today, trackers can reconstruct millions of three-dimensional images per second.

The disadvantage of tracking is that neutral particles do not produce ionization trails and hence are invisible. The kaon that decays into two pions is neutral, so you only see the pions. Neutral particles that never or rarely decay are even more of a nuisance. Fortunately, calorimeters fill in this gap, since they are sensitive to any particle that interacts with matter.

Interestingly, the Higgs boson was discovered in two decay modes at once. One of these, Higgs to four muons, uses tracking exclusively, since the muons are all charged and deposit minimal energy in a calorimeter. The other, Higgs to two (neutral) photons, uses calorimetry exclusively, which will be the subject of the next Nutshell.

Feb 5, 2015: The universe takes sides

A photograph from the day that Sweden switched from driving on the left to driving on the right. A physicist might call this a change in chirality, since one driving pattern was replaced by its mirror image. Photo courtesy of rarehistoricalphotos.com

One morning in 1967, all cars in Sweden had to stop, move over to the right side of the road, wait 10 minutes and then resume driving. From that day forward, Sweden has been a drive-on-the-right country, like its closest neighbors and most of Europe. One may argue that left versus right is pure convention, but conventions are contagious: It’s much easier when they match.

People aren’t as left-right symmetric as they appear. At an anatomic level, human hearts are slightly on the left and livers are predominantly on the right. The differences are even more striking at a molecular level — molecules and their mirror-image counterparts have completely different biological effects. For instance, the mirror image of the molecule that gives mint its taste is the flavor of caraway seeds. More dramatically, life on Earth is composed purely of left-handed amino acids. Right-handed versions of these molecules exist, but early microbes decided against them.

Still, this seems to be a matter of convention, or at least an accident of evolution. At the most fundamental level, is there a difference between left and right? Surprisingly, the answer seems to be yes. The weak force, uniquely among the four fundamental forces, can only be felt by left-handed particles (and right-handed antiparticles). Particle physics interactions involving W bosons simply do not have mirror-image counterparts.

We can distinguish a left-handed particle from a right-handed one by how its spin aligns with its trajectory. If you point the thumb of your right hand in the direction of a particle’s motion and your fingers curl in the direction that it spins, then the particle is right-handed and is completely invisible to the weak force. A particle that spins in the direction that your left fingers curl is perfectly susceptible to the weak force.

The idea that a fundamental interaction of the universe would prefer left over right is so strange that some physicists hypothesize that it only looks that way because we’re not seeing the whole picture. Suppose that the W boson we know is half of a pair: that there’s a W’ boson that only interacts with right-handed particles and that we haven’t discovered it yet because its mass is more than current particle colliders can produce. Curiously, this theory could also explain the small yet nonzero mass of the neutrino.

In this theory, the universe would have no fundamental left-right preference, but slight fluctuations in the early universe made the W boson light and the W’ heavy in some regions of space. Once seeded, that asymmetry spread, like the microbes that favored left-handed amino acids and the drivers that favored the right-hand side of the road. If this theory is true, then there could even be parts of the universe where the other convention was chosen, in much the same way that most island nations drive on the left. They get away with it because they’re islands.

Jan 8, 2015: Freezing particle beams and other things

This spiderweb was overtaken by a sudden frost.

The last few days have been cold. Really cold. It prompted me to make some comparisons: cold as outer space, cold as laser beams, cold as an electron — but most of these are nonsensical. The concept of temperature only makes sense in settings with large numbers of particles and a diffuse distribution of energy, such as our everyday world. A single particle doesn’t have a temperature, and the temperature of a laser beam is negative (below absolute zero).

Temperature is a relationship between the energy of a system of particles and its complexity, also known as entropy (see “Entropy is not disorder”). An ice crystal is an example of a simple, or low-entropy, state because the water molecules are aligned in rigid positions. Hot water is a more complex, high-entropy state because there are so many ways that the water molecules can tumble over one another. If it takes a lot of energy to increase the entropy a little, then the temperature is high. If it doesn’t take much, then the temperature is low.

This is why an isolated particle has no temperature. It has energy but no entropy because there is only one configuration — like a single Lego brick, all alone. A bunch of particles, such as the electrons or protons in a particle beam, has energy and entropy, so you can measure the beam’s temperature. If all the particles are in rigid lockstep and racing down the beam pipe with a lot of energy, it is a low-temperature beam, like a flying ice cube. In fact, cooling is an important step in preparing the beam: The random motions of the initial gas must be dampened while its collective motion is accelerated. The energy of the collective motion doesn’t count toward temperature because changing the overall speed doesn’t change the entropy.

Lasers are even more exotic. The source of a laser beam is made of atoms with only two (relevant) energy states: on and off. When more than half of the atoms are “on,” the laser’s entropy decreases, rather than increases. Half-on and half-off is a more complex state than all-on. Thus, as the energy increases, turning more atoms “on,” the entropy decreases, and the temperature is negative.

Even in this bizarre case of negative temperature, absolute zero cannot be reached. Zero temperature means that a change in energy would cause an infinite change in entropy, and since entropy is ultimately a count of the number of configurations, it cannot be infinite. Laser beams approach but do not reach absolute zero from below, just as normal systems approach it from above.

So how cold is it outside, in an absolute sense? Zero degrees Fahrenheit, relative to absolute zero, is 10 percent colder than 45 degrees Fahrenheit, so if you’re looking for a strong statement about the weather, we’re a 10th part closer to absolute zero than we were two weeks ago.
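If you want to check that figure, convert both temperatures to kelvins, which measure distance above absolute zero:

```python
# Compare two Fahrenheit temperatures on the absolute (kelvin) scale.

def fahrenheit_to_kelvin(f):
    return (f - 32) * 5 / 9 + 273.15

cold = fahrenheit_to_kelvin(0)     # ~255.4 K
mild = fahrenheit_to_kelvin(45)    # ~280.4 K
print(1 - cold / mild)             # ~0.09: roughly 10 percent colder
```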

Nov 20, 2014: Heisenberg's uncertainty principle and Wi-Fi

Bandwidth, or the spreading of a radio station onto multiple, neighboring frequencies, is related to uncertainty in quantum mechanics.

When I first started teaching, I was stumped by a student who asked me if quantum mechanics affected anything in daily life. I said that the universe is fundamentally quantum mechanical and therefore it affects everything, but this didn’t satisfy him. Since then, I’ve been noticing examples everywhere.

One surprising example is the effect of Heisenberg’s uncertainty principle on Wi-Fi communication (wireless internet). Heisenberg’s uncertainty principle is usually described as a limit on knowledge of a particle’s position and speed: The better you know its position, the worse you know its speed. However, it is a general principle with many consequences. The most common in particle physics is that the shorter a particle’s lifetime, the worse you know its mass. Both of these formulations are far removed from everyday life, though.

In everyday life, the wave nature of most particles is too small to see. The biggest exceptions are radio and light, which are wave-like in daily life and only particle-like (photons) in the quantum realm. In radio terminology, Heisenberg’s uncertainty principle is called the bandwidth theorem, and it states that the rate at which information is carried over a radio band is proportional to the width of that band. Bandwidth is the reason that radio stations with nearly the same central frequency can sometimes be heard simultaneously: Each is broadcasting over a range of frequencies, and those ranges overlap. If you try to send shorter pulses of data at a higher rate, the range of frequencies broadens.

Although this theorem was developed in the context of Morse code over telegraph systems, it applies just as well to computer data over Wi-Fi networks. A typical Wi-Fi network transmits 54 million bits per second, or 18.5 nanoseconds per bit (zero or one). Through the bandwidth theorem, this implies a frequency spread of about 25 MHz, but the whole Wi-Fi radio dial is only 72 MHz across. In practice, only three bands can be distinguished, so only three different networks can fill the same airwaves at the same time. As the bit rate of Wi-Fi gets faster, the bandwidth gets broader, crowding the radio dial even more.
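The arithmetic is short enough to check. Taking the common convention that a pulse of duration Δt occupies a band roughly 1/(2Δt) wide (an assumption; conventions differ by small factors), the numbers above follow:

```python
# Bandwidth theorem, rough form: frequency spread ~ 1 / (2 * pulse duration).

bit_rate = 54e6              # bits per second (802.11g Wi-Fi)
dt = 1 / bit_rate            # seconds per bit
df = 1 / (2 * dt)            # frequency spread of one network
dial = 72e6                  # approximate width of the 2.4 GHz Wi-Fi dial

print(dt * 1e9)              # ~18.5 nanoseconds per bit
print(df / 1e6)              # ~27 MHz, the "about 25 MHz" quoted above
print(dial / df)             # ~2.7: only about three non-overlapping networks
```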

Mathematically, the Heisenberg uncertainty principle is just a special case of the bandwidth theorem, and we can see this relationship by comparing units. The lifetime of a particle can be measured in nanoseconds, just like the time for a computer to emit a zero or a one. A particle’s mass, which is a form of energy, can be expressed as a frequency (for example, 1 GeV is a quarter of a trillion trillion Hz). Uncertainty in mass is therefore a frequency spread, which is to say, bandwidth.
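The conversion is just Planck’s relation E = hf. Here is a quick check, using the Z boson’s measured decay width of about 2.5 GeV for the last line:

```python
# Convert a mass-energy in GeV to a frequency in Hz via E = h * f.

h = 6.626e-34      # Planck's constant in joule-seconds
GeV = 1.602e-10    # one GeV in joules

def GeV_to_Hz(energy_GeV):
    return energy_GeV * GeV / h

print(GeV_to_Hz(1.0))   # ~2.4e23 Hz: a quarter of a trillion trillion Hz
print(GeV_to_Hz(2.5))   # ~6e23 Hz: the Z boson's width as a bandwidth,
                        # about 600 trillion GHz
```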

Although it’s fundamentally the same thing, the numerical scale is staggering. A computer network comprising decaying Z bosons could emit 75 million petabytes per second, and its bandwidth would be 600 trillion GHz wide.

Oct 23, 2014: Unparticle physics

Fractals like the one above have a property known as scale invariance. Some exotic forms of matter may also be scale-invariant.

The first property of matter that was known to be quantized was not a surprising one like spin — it was mass. That is, mass only comes in multiples of a specific value: The mass of five electrons is 5 times 511 keV. A collection of electrons cannot have 4.9 or 5.1 times this value — the multiple must jump from exactly 4 to exactly 5 to exactly 6, and this is a quantum mechanical effect.

We don’t usually think of mass quantization as quantum mechanical because it isn’t weird. We sometimes imagine electrons as tiny balls, all alike, each with a mass of 511 keV. While this mental image could make sense of the quantization, it isn’t correct since other experiments show that an electron is an amorphous wave or cloud. Individual electrons cannot be distinguished. They all melt together, and yet the mass of a blob of electron-stuff is always a whole number.

The quantization of mass comes from a wave equation — physicists assume that electron-stuff obeys this equation, and when they solve it, the only solutions have masses in integer multiples of 511 keV. Since this agrees with what we know, it is probably the right equation for electrons. However, there might be other forms of matter that obey different laws.

One alternative would be to obey a symmetry principle known as scale invariance. Scale invariance is a property of fractals, like the one shown above, in which the same drawing is repeated within itself at smaller and smaller scales. For matter, scale invariance is the property that the energy, momentum and mass of a blob of matter can be scaled up equally. Normal particles like electrons are not scale-invariant because the energy can be scaled by an arbitrary factor, but the mass is rigidly quantized.
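One way to see the conflict is through the relativistic relation m² = E² − p² (in units where c = 1): scaling energy and momentum together scales the mass with them, which a rigidly quantized mass cannot follow. A sketch:

```python
# Invariant mass from energy and momentum, in units where c = 1.

def invariant_mass(E, p):
    return (E**2 - p**2) ** 0.5

E, p = 5.0, 4.0
print(invariant_mass(E, p))                      # 3.0, whatever the motion

for scale in (2.0, 10.0):
    print(invariant_mass(scale * E, scale * p))  # 6.0 and 30.0

# A scale-invariant form of matter would allow that whole continuum of masses;
# an electron, pinned at 511 keV, does not.
```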

It is theoretically possible that another type of matter, dubbed “unparticles,” could satisfy scale invariance. In a particle detector, unparticles would look like particles with random masses. One unparticle decay might have many times the apparent mass of the next — the distribution would be broad.

Another feature of unparticles is that they don’t interact strongly with the familiar Standard Model particles, but they interact more strongly at higher energies. Therefore, they would not have been produced in low-energy experiments, but could be discovered in high-energy experiments.

Physicists searched for unparticles using the 7- and 8-TeV collisions produced by the LHC in 2011-2012, and they found nothing. This tightens limits, reducing the possible parameters that the theory can have, but it does not completely rule it out. Next spring, the LHC is scheduled to start up with an energy of 13 TeV, which would provide a chance to test the theory more thoroughly. Perhaps the next particle to be discovered is not a particle at all.

Sep 25, 2014: What is the holographic principle?

This is not what scientists mean when they ask if the universe is holographic.

Sometimes, news reports of scientific findings are oversimplified, exaggerated or just wrong. The most extreme disparity that I have seen is between the holographic principle of theoretical physics and how it is usually presented in the news. Most of the news articles on this topic include a statement about “living in a hologram” or a simulated universe, like The Matrix, which isn’t what the physicists mean at all.

The physicists are referring to a surprising relationship between gravity and quantum mechanics. Gravity is known to be a consequence of curved space-time — for example, the Earth curves nearby space and time, which bends the trajectories of tossed objects downward. However, gravity is not well understood on extremely small scales, such as the distance between subatomic particles. The quantum mechanical interactions of the other three forces are understood on small scales, but not for strong fields.

The surprise is that there is a correspondence between the mathematical theory of gravity (specifically, for space-time with an anti-de Sitter shape) and the mathematical theory of quantum fields (specifically, for field distributions that have a conformal symmetry). A correspondence means that although the theories have different interpretations, equations in one theory can be translated to the other theory by a dictionary that relates terms in one with terms in the other. This relationship is incredibly useful because hard problems in gravitational physics often translate to easy problems in quantum field theory and vice versa, so physicists can now solve more problems.

What does this have to do with holograms? The word “holographic” is used because this dictionary translates three-dimensional gravity problems into two-dimensional field theory problems. Similarly, a hologram produces a three-dimensional image from a two-dimensional film or glass plate. The three-dimensional information in the hologram (for instance, what an object looks like from all angles) must be flattened into the two-dimensional film that is used to project it. The dictionary that relates gravity to field theory must also flatten three-dimensional information about the gravity problem into the two dimensions of the field theory problem. Thus, the phrase “holographic principle” is meant metaphorically.

This correspondence is not certain or completely understood, so experiments like Fermilab’s Holometer are looking for evidence of quantum mechanical features in space-time — specifically, fluctuations in the distance between two mirrors. Also, it’s not just a clever trick to solve more physics problems with a wider set of tools: It could explain why the information content of a black hole is related to its surface area (two-dimensional) rather than its volume (three-dimensional).

There’s nothing wrong with wondering, philosophically, whether the world as we know it is a Matrix-style simulation. It’s just a different question from the one physicists are trying to address when they investigate the holographic principle.

Aug 29, 2014: Invisibility squared

The former presence of a cat on the patio can be inferred from where the rain didn't land. Similarly, sterile neutrinos may be inferred from their effects on normal neutrinos, which themselves are barely visible.

What does it mean for something to be invisible? If it does not reflect light with the right wavelengths, it is not visible to humans, though it might be detected by a specialized instrument. Neutral particles, such as the neutrons in an atom, do not interact with photons of any wavelength (unless the wavelength is small enough to resolve individual charged quarks within the neutron). Thus, they are invisible to nearly every instrument that uses electromagnetic radiation to see.

However, neutrons are easy to detect in other ways. They interact through the strong and weak nuclear forces, and neutron detectors take advantage of these interactions to “see” them. Neutrinos, on the other hand, are still more invisible, since they have no constituent quarks and interact only through the weak force. Billions of neutrinos pass through every square centimeter per second, but only a handful of these per day are detectable in a room-sized instrument.

Now suppose there were another kind of neutrino that did not interact with the weak force. Physicists would call such a particle a sterile neutrino if it existed. How could it be detected? If something can’t be detected, does it even make sense to talk about it? Could there be a whole world of other particles, filling the same space we do, that can never be detected because they don’t interact with anything that interacts with our eyeballs?

In principle, anything that has mass or energy can be detected because it interacts gravitationally. That is, if there were a sterile neutrino planet right next to the Earth, then it would change the way that satellites orbit: This is our gravitational detector. However, a small mass, such as an individual particle, would deflect orbits so little that it could not be detected in practice.

Although sterile neutrinos would have no effect on ordinary matter, they could be detected through what they do to other neutrinos. Neutrinos of different types mix quantum mechanically. That is, muon neutrinos created by a muon beam can become electron neutrinos and tau neutrinos when they are detected. If there were a fourth, sterile, type of neutrino, then the visible neutrinos would also partly transition to sterile neutrinos in flight and change the fractions of the three visible types of neutrinos in the detector.

In the mid-1990s, an experiment called LSND saw what looked like a sterile neutrino signal, so MiniBooNE, an experiment at Fermilab, studied the effect in more detail. As the MiniBooNE scientists investigated, the story got weirder: The numbers of visible neutrinos didn’t add up, but the discrepancy appeared at different energies than expected. No simple explanation makes sense of the data, but a sterile neutrino might. A future experiment, MicroBooNE, will study this phenomenon with higher sensitivity. It would be impressive if the key to new physics is an invisible particle, glimpsed only through its effect on nearly invisible particles!

Aug 1, 2014: Baryon acoustic oscillations

Distribution of galaxies observed by SDSS: Each dot is a galaxy. More distant, older galaxies are at the top of the image, with closer, more recent galaxies at the bottom, showing the development of structure over time, from smooth to textured.

In Eve’s Diary, a short story by Mark Twain, Eve writes, “This majestic new world is marvelously near to being perfect, notwithstanding the shortness of the time, but there are too many stars in some places and not enough in others.” If you can get a good view of the sky, far from city lights, you’ll see that stars are grouped in clumps: random but not uniformly random.

Part of this is due to gravity. Stars that are close to one another gravitationally attract and tend to form tight clusters with gaps between the clusters. The same is true of whole galaxies — nearby galaxies tend to amalgamate into superclusters.

The other part of the explanation has to do with the distribution of matter in the early universe. The early universe was nearly uniform, but not perfectly so. Tiny quantum fluctuations, stretched to cosmic proportions by the expansion of space, provided the initial seeds that helped matter start coalescing into galaxies, stars, planets and us.

This phenomenon is known to astronomers as baryon acoustic oscillations (BAO). Baryons are particles of ordinary matter (as opposed to dark matter) — “baryon” is a general term for particles such as protons and neutrons, which provide most of the mass of normal atoms. The “acoustic oscillations” part refers to the fact that fluctuations in the early universe were actually sound waves. Before the universe was cool enough to transition from a glowing plasma into a transparent gas (the first 380,000 years), the light from the Big Bang exerted pressure on charged particles in the plasma, and the waves of high and low pressure were analogous to sound in air.

When matter became transparent, the light passed through it unimpeded, and this light is today visible as cosmic microwave background (CMB). Matter, on the other hand, was suddenly released from outward pressure and became subject only to gravity. The CMB is effectively a snapshot of the lumpiness of the universe when it was 380,000 years old, and the BAO is the same distribution amplified by gravity over the last 13.8 billion years.

Although the CMB and BAO have been separate for most of the history of the universe, the imprint of CMB-scale fluctuations has been discovered in the distribution of galaxies today. The characteristic wavelength between crests of galactic superclusters and troughs of intergalactic voids is about 500 million light-years, which corresponds to 73 octaves below middle C. This is exactly what would be expected from propagating the crests and troughs of the CMB forward by 13.8 billion years.

Jun 27, 2014: Subatomic traction

The trajectories of protons in the LHC are controlled by magnetic fields. An upward-pointing magnetic field (B) applies a force (F) to the right on protons flowing through the beam pipe (into the plane of the picture), and this steers them around the (imperceptible) curve of the ring.

Manipulating small objects, such as the cogs of an old-fashioned watch, is difficult with bulky fingers, so special tools are needed to fit them together into a working system. The emerging field of nanotechnology is concerned with manipulating objects the size of atoms and making molecular machines. Protons, electrons and other subatomic particles are hundreds of thousands of times smaller than an atom, so even the techniques of nanotechnology cannot help. How do you grab and move a proton?

Perhaps surprisingly, all you need are simple electric and magnetic fields, such as what might be covered in a first-year physics class. There are two basic interactions: Electric fields accelerate positively charged particles in the direction that the electric field points, and magnetic fields accelerate them at right angles to the magnetic field and the particle’s original direction of motion.

The latter case is illustrated in the photo of an LHC section above. Positively charged protons travel through the beam pipe, and magnets around the beam produce an upward-pointing magnetic field. The direction that is perpendicular to both the protons’ trajectories and the upward-pointing field is to the right. Because the field always turns the protons toward the center of the ring, they stay within the beam pipe as it curves around its 17-mile circumference.

The trick is building a strong enough magnet to keep 7-TeV protons within the ring — the LHC magnets need to produce 8.3 teslas of field strength, which is 130,000 times stronger than the Earth’s field. This is accomplished by making the coils of wire in the electromagnet out of superconducting wire. The bulk of an LHC magnet is for cryogenics to keep the wires at a low enough temperature to superconduct (hold a current with zero resistance).
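Rearranging the same rule of thumb that governs detector magnets, r = p / (0.3 B), shows why the field must be so strong. The numbers below are the ones quoted above:

```python
# Bending radius of a proton in the LHC dipole field: r = p / (0.3 * B).

p = 7000.0   # proton momentum in GeV/c (7 TeV)
B = 8.3      # dipole field in teslas

print(p / (0.3 * B))   # ~2800 meters

# The ring's geometric radius is about 4.2 km; dipoles fill only part of the
# circumference, with straight sections in between, so the bending radius
# inside the magnets is tighter than the ring as a whole.
```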

Although a magnetic field can bend the path of a stream of protons, it cannot increase their speeds. This is because the magnetic force is always perpendicular to the direction of the protons’ motions. To accelerate protons up to 7 TeV, one needs a force pointing in the direction of their motion, which can only be accomplished by an electric field.

Strong, steady electric fields are hard to build, since they tend to discharge by emitting an electric spark. Strong oscillating fields — also known as radio waves — are easier, since they switch directions before a spark has a chance to develop. Unfortunately for an accelerator, this means that the electric field is pointing in the wrong direction half of the time. The solution is to replace a continuous beam with a staccato beam of short pulses known as bunches, and coordinate the bunches to enter the electric field only at those moments when it is pointing in the right direction. Needless to say, the timing is tricky.

The basic physics of a proton accelerator is straightforward enough to be within reach of a first-year physics student, but building an accelerator for energy or intensity frontier physics pushes the limits of modern technology. A minute to learn, a lifetime to master.

May 30, 2014: What is a jet?

In particle collisions like the one shown above, it is common for debris to be grouped into clumps known as jets rather than uniformly distributed in a circle. This event has about 10 jets, but most have only two or three. Image courtesy of ATLAS

In these articles, I often get stuck when I need to describe jets. Jets are complicated yet ubiquitous in particle physics — they’re hard to avoid and hard to explain. In this article, I intend to give jets the space they deserve.

Whenever a single quark or gluon flies off on its own, it pulls new particles out of the vacuum and becomes a cloud of particles, flying in roughly the same direction. This is a jet. If physicists want to know how many quarks or gluons were emitted by an interaction, they have to disentangle the (sometimes overlapping) jets.

This phenomenon is unlike any other in particle physics. There are other processes that create particle-antiparticle pairs, but free quarks spontaneously generate more quarks until they are all bound, either in pairs or triples. Isolated quarks cannot exist because the force between them doesn’t fall off to zero with distance. Gravity, electric forces and the weak force all become negligible as objects separate, but the strong force, which governs the behavior of quarks, increases as quarks separate, leveling off to a constant — approximately 14 tons.

When a quark is far enough from its neighbors, the energy becomes large enough to become the mass of a new quark-antiquark pair. (Distance times force is energy, and energy can be converted into mass.) This is why an isolated quark doesn’t stay isolated: It costs less energy to create more quarks than to be alone. For a highly energetic quark or gluon flying away from a collision, this process happens several times, resulting in a jet.
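Here is that energy argument as a back-of-the-envelope calculation, assuming “14 tons” means metric tons and taking one femtometer, a typical hadron size, as the separation:

```python
# Energy gained by separating a quark pair one femtometer against a ~14-ton force.

g = 9.8                 # m/s^2, to turn tons into newtons
force = 14000 * g       # "14 tons" (assumed metric) as a force, ~1.4e5 N
distance = 1e-15        # one femtometer, roughly a hadron's size
GeV = 1.602e-10         # one GeV in joules

print(force * distance / GeV)   # ~0.9 GeV: enough for a light quark-antiquark pair
```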

Although jets are interesting on their own, they’re present in any interaction that produces quarks or gluons. Often, they act as a smokescreen, hiding information about the primary interaction. For instance, more than half of Higgs bosons decay to a b quark and a b antiquark, but they appear in a particle detector as dozens of particles in two rough bundles. It’s hard to tell exactly which particles came from each quark and which are from other debris. This ambiguity is a source of uncertainty in the energy of the original b quarks — so much so that the Higgs mass peak has never been observed in this decay mode, despite it being the most frequent.

To combat the uncertainties due to jets, physicists have developed sophisticated jet finding algorithms. These algorithms are related to clustering, a machine learning technique that lets computers discover patterns on their own, but jet finders are more highly specialized. The latest generation of algorithms peers inside the jet and identifies individual particles (particle flow algorithms) and even jets within jets (jet substructure algorithms). It is now possible to do precise experiments with jets, despite their apparent messiness.

Mar 7, 2014: Certainty about quantum uncertainty

Uncertainty in quantum mechanics is not a fudge factor. Its internal structure yields complex patterns of high and low probability that would not arise from simple measurement error. Seen here are the probability distributions of an electron in an atom.

This is the last article in a series about quantum mechanics. Previously, I talked about how quantities can be multivalued yet restricted to whole numbers, like a light switch that is both on and off; how quantum processes can include acausal influences, like a time traveler who gets his time machine by going back and giving it to himself; and how so-called particle waves are neither waves nor particles. These are such dubious claims that I was tempted to crowd the exposition with descriptions of experiments, but instead of confusing the issue, I left the “how we know” for this article.

There are several objections one could make to my presentation. Though I said that a quantum light switch is both on and off, measurements will find it either on or off, not both. The time loops of quantum processes are similarly hidden behind a veil of indeterminacy. It might seem like all of these quantum effects are just speculations about what’s happening inside the noise of an uncertain measurement, but there’s more to it than that.

The key ingredient is an effect known as interference. The probability of finding a multivalued quantity as one value rather than another is the square of a more fundamental description called the wavefunction. Squaring a number hides information: 25 is the square of 5, but it is also the square of −5. Since we can only measure probabilities, we can’t determine the sign of a wavefunction, but if two wavefunctions overlap, or “interfere,” we can discover a difference in their signs. For example, if one has magnitude 5 and the other has magnitude 3, the square of 5 + 3 (or −5 + −3) is 64, but the square of −5 + 3 (or 5 + −3) is 4. When you introduce a second wavefunction, the resulting probability can sometimes decrease.
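The same arithmetic in Python, with an illustrative probability() helper that squares the summed amplitudes:

```python
# Interference: add wavefunction amplitudes first, then square.

def probability(*amplitudes):
    return sum(amplitudes) ** 2

print(probability(5))      # 25
print(probability(3))      # 9
print(probability(5, 3))   # 64: constructive, more than 25 + 9
print(probability(-5, 3))  # 4: destructive, less than either alone
```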

Probabilities, on the other hand, only increase when you combine them. My chances of winning the lottery would be almost doubled if I had twice as many tickets. Most experiments that distinguish quantum multivaluedness from mere uncertainty exploit this distinction. Wavefunctions describing a particle’s spread in position have alternating peaks and troughs of high and low probability, whereas measurement error in the same circumstances would yield a smeared-out blob.

Physicists were so uncomfortable with the idea of acausal influence that they considered countless alternatives to quantum mechanics, cleverly accounting for the apparent acausality with complicated mechanisms. In 1964, John Bell used an interference effect to pose a numerical test that distinguishes quantum mechanics from all causal mechanisms based on a few weak assumptions. This test was first performed experimentally in 1981 by Aspect and Grangier and has been repeated often under different circumstances with ever-weaker sets of assumptions. The results have always favored the quantum explanation.

Studying quantum mechanics is like a conversation with an alien race. Our prior experience and even brain evolution have not prepared us for this conversation, but if we can stretch our minds around what the data are telling us, we’ll never see the world the same way again.

Feb 21, 2014: Mixed metaphors

When people first encounter an idea, they often frame it as a combination of previously understood ideas, like the sea-horse on the left. The reality (right) may be a different thing altogether.

This is the third in a four-part series on quantum mechanics. Previously, I discussed the discrete yet multivalued nature of quantum properties and the fact that quantum cause-and-effect need not be forward in time. Another oddity of quantum particles is that they are sometimes waves, not particles, though this is an altogether different kind of paradox.

By the time physicists began discovering quantum mechanics, they had already been debating whether matter and light are made of hard, indivisible particles or fluid waves. Early guesses by Democritus and Isaac Newton favored particles, but by the turn of the 20th century, there was strong evidence that light is a wave. These conceptual categories, “particle” and “wave,” are based on experiences with everyday things like pebbles and water, generalized by mathematical abstraction. It is amazing how broadly these simple ideas apply to natural phenomena, but both fail to describe what matter does at its smallest scales. Quarks, electrons and the rest are neither waves nor particles. They are another thing entirely.

The particle-like aspect of quantum objects is that they may entirely interact when they collide or they may entirely miss each other. This is what little rocks do when they’re thrown at each other: They hit or they miss. It would be very strange for a wave to work this way. Imagine making a splash in Virginia Beach, watching the wave spread out across the Atlantic Ocean, and then seeing the full splash hit someone in Cape Town, Lisbon, Plymouth, Reykjavik or some other single place along the African or European coast. And yet light, which has clear wave-like properties, does entirely hit or entirely miss when individual photons collide.

The wave-like aspect of quantum objects is that they tend to spread out. Moreover, they can overlap each other and sometimes cancel each other out, like carefully timed waves. It’s difficult enough imagining two pebbles occupying the same space at the same time, stranger still to imagine them becoming zero pebbles in the place where they overlap. Electrons engage in this kind of behavior.

Though often framed as a paradox, these properties are not inconsistent with one another, just never seen together at macroscopic scales. Our human notions of particle and wave are both inadequate for the microscopic world. The best metaphor that I think captures the behavior of quantum objects is the splash that propagates across the Atlantic and pops up with full force in some single (random) place. Unlike the time paradoxes, this aspect of quantum mechanics does not push us to the limits of what is conceivable — we just have to approach it with the willingness to make new metaphors.

Next week, I’ll present uncertainty in quantum mechanics and the experiments that tell us that the universe we live in is a quantum universe.

Feb 7, 2014: By his bootstraps

"By His Bootstraps" (1941) was a science fiction story in which a man acquired a time machine from a future version of himself. This is an example of an acausal loop because the time machine need never be invented.

In my last Physics in a Nutshell, I started a series about quantum mechanics by addressing its first strange feature: the fact that quantities can be multivalued yet restricted to whole numbers, like a light switch that is both on and off but never halfway on.

The second weird thing about quantum mechanics is that it takes as much liberty with time and causality as is logically possible. Time travel, as it is usually presented in fiction, is full of logical paradoxes. Suppose you go back in time and prevent yourself from inventing a time machine. Without a time machine, you can’t go back to make the change, and an infinite regress ensues. But there is another way of changing history that isn’t impossible, merely contrived: Suppose you go back and teach yourself how to invent a time machine, retroactively making the trip possible. Heinlein’s novella “By His Bootstraps” worked this way, and in a sense, so did Sophocles’ “Oedipus Rex.”

If quantum processes are taken literally as sequences, they resemble constructive time loops. The simplest example is the mutual repulsion of charged particles (the reason hair stands up on a dry day). Two charged particles repel each other because one emits a photon, recoiling from the photon’s momentum, and the other catches it, recoiling the other way. However, they never miss — the pitcher doesn’t throw the ball unless the catcher catches it. If viewed at relativistic speeds (charged particles in an accelerator, for instance), the catcher’s catch can even precede the pitcher’s throw, with the photon traveling backward between them. The same process, viewed by two different observers, happens in a different time order.

More complex examples demonstrate this more conclusively. The lesson physicists have drawn is that a quantum process is not a sequence of independent steps, but an undivided cloth that entirely happens or entirely does not happen. Oedipus would not have married his mother if he were not trying to avoid the prophecy that he would do so, and the prophecy would not have been uttered if he did not do so. The whole process can happen without inconsistency, and it can also not happen without inconsistency.

On a human scale, closed time loops would seem to imply a lack of free will, but the randomness of quantum events prevents us from drawing simple philosophical conclusions. Even though experiments on coupled processes have been scaled up such that measurements on one side of a workbench predict outcomes on the other, the sequence of messages is strictly random and cannot be influenced by the experimenter. Thus, we can’t use this to communicate with or change the past. Quantum processes are as acausal as is logically possible, and no more.

In the next article, I will talk about waves, particles and the strange fact that matter at microscopic scales appears to be both.

Jan 24, 2014: "Nobody understands quantum mechanics"

Intuitively, we expect physical quantities such as energy to be continuous and single-valued, like a dimmer switch. At very small scales, however, they are both discrete and multivalued, like a light switch that can be on and off at the same time.

Of all the scientific theories that have broken out into public consciousness, none have ranged as far as quantum mechanics. This subject is sometimes presented as an erudite abstraction, as a smokescreen of uncertainty, as an almost mystical philosophy or as evidence that physicists have lost their minds. It’s rarely said that quantum mechanics makes sense.

There is good reason for that. Quantum mechanics is as hard to believe as anything can be while being demonstrably true. Feynman’s famous quote, “I think I can safely say that nobody understands quantum mechanics,” is sometimes taken out of context as suggesting that if you think you get it, you don’t. This defeatist attitude is unnecessary. Quantum mechanics is bizarre, but it can be understood.

The rules of quantum mechanics are logical, yet unfamiliar. For example, we expect a physical quantity like the position of a particle to be a single number, something that could be measured by a ruler. It is here and not there. That number may vary continuously as the particle moves, and it may be imprecisely known if we have not measured it well, but we intuitively expect it to be a specific number at a specific time.

What physicists have learned is that the position of a particle is not a single number: It is multivalued. The particle is here and there in a way that can be quantified, called the wavefunction. We imagine the wavefunction as a blob filling space, describing the degree to which the particle is in each place: thicker here, thinner there. It can be measured and charted, but our brains don’t like it because we evolved to manipulate the macroscopic world, everything larger than a splinter and smaller than a mammoth. Studying quantum mechanics forces us beyond our comfort zone, to apprehend something truly alien and shed our macrocentrism.

When I first learned about quantum mechanics, I was bothered by the crispness of quantum properties almost as much as their fuzziness. Not only is the energy of a particle multivalued, but each of those values is a whole number, never a fraction. It is as though the sliding dimmer switch of our intuition has been replaced by an on-off switch with no middle value, but one that can be 30 percent on and 70 percent off, or any other ratio. Quantities have surprisingly little freedom in what values they can take, but surprisingly much freedom in how many they can take at once.

This is the first in a four-part series on quantum mechanics. In the next article, I will present the time paradoxes, followed by wave-particle duality and an overview of how we know what we know.

Dec 20, 2013: Edgar Allan Poe and the beginning of time

Author Edgar Allan Poe, best known for his macabre poems and detective stories, proposed a solution to a paradox about the dark night sky.

If the universe is infinite and uniformly filled with stars, then any line of sight, when we look up into the sky, should eventually hit some distant star. If so, then the night sky would be as bright as the face of the sun, rather than dark. How can this be?

This paradox vexed 19th-century astronomers, but today the puzzle is solved. The universe is simply not old enough for light from such distant stars to reach our eyes. Wait a billion years, and another billion light-years of galaxies will help to fill the gaps.

It’s not widely known that this solution was first proposed by Edgar Allan Poe, an author of horror and detective stories. In an essay called “Eureka: a Prose Poem” (1848), he speculates that stars are so distant that, for some, “no ray from it has yet been able to reach us at all,” and he reminds the reader that light travels at a finite speed. He assumed implicitly that “the universe of stars” has a finite age, and together these are the three ingredients of the present-day explanation.

Poe’s essay was more spiritual than scientific, his assumptions about the distances to most stars could not be tested by measurements of the day, and his implicit assumption that the universe of stars had a beginning was motivated by religious faith. This point about the beginning of time has always fascinated and frustrated humanity. Whether one starts by believing in a moment of creation or a universe that has always existed, one finds oneself asking either, “What happened before the beginning?” or “How did this uncreated world come to be?” It’s hard to be comfortable with either scenario.

In the early 20th century, the discovery that the universe is expanding came as a shock because many scientists at the time expected it to be without beginning and largely unchanging. Using general relativity, which relates the rate of spatial expansion to the matter and energy that fill space, the current best measurements point back to an absolute beginning of time 13.8 billion years ago. It seems as though science has vindicated all of Edgar Allan Poe’s beliefs.

However, extrapolating all the way down to a point of zero size, infinite temperature and infinite density goes beyond our present state of knowledge. Physicists are still learning how matter behaves at very high temperatures and energies, and the behavior of matter affects the expansion rate of space. The high energies currently being studied at the LHC, for instance, correspond to a temperature of quadrillions of degrees, which was the temperature of the universe a trillionth of a second after the naive Time Zero. If new phenomena appear at higher energies, then that first picosecond of history may need to be rewritten.

This distinction between the physics of the very early universe and the metaphysics of creation is one that I feel is important, because blurring it has led to misunderstandings. Physicists and journalists use the phrase “the big bang” in different ways: To physicists, it means the early expansion history and development of the universe as we know it, but in the popular press, it is often taken to mean creation itself. Using the physicist’s definition, it’s not wrong to ask, “What happened before the big bang?” In fact, this is an active area of scientific research.

Nov 15, 2013: The well-balanced Higgs

While possible, this rock formation is not likely to arise in nature.

In conversations among physicists, sometimes you’ll hear someone say, “This theory fits all the observed data, but it’s too fine-tuned.” When I first heard that, it struck me as unscientific because any hypothesis that does not contradict experimental measurement is, in principle, a possibility. “Fine tuning” refers to extreme cancellations in a proposed explanation for something. For instance, a river might be lukewarm because it is the confluence of a boiling geyser and an ice slurry, but this leaves unanswered the question of why there was exactly enough boiling water to balance the ice. Near-exact cancellations are possible, but unlikely.

While we cannot reject possible theories just because they sound unlikely, a finely tuned theory is probably incomplete and should be investigated further. As another example, consider a car that breaks down in every possible way at the same time: The head gasket blows, the engine seizes up, and windshield wiper fluid squirts everywhere. One could say that each of these components has a 10-year lifespan and that it is a coincidence that they all broke at the same instant. However, there might be a deeper explanation that links them — perhaps the head gasket caused a coolant leak that caused the engine to overheat and seize, which boiled the windshield wiper fluid and made it spray. The first explanation isn’t exactly wrong, but it is missing an important insight. In the same way, a finely tuned theory like the Standard Model of particle physics isn’t exactly wrong, but it is probably incomplete.

The Standard Model is finely tuned in several ways, but the most significant is the fact that it does not explain why gravity is so much weaker than the other forces. Here’s a more technical statement of the problem: Why is the Higgs mass (now known to be 125 GeV) about 100 quintillion times less than the characteristic energy scale of quantum gravity? When physicists tried to predict the Higgs mass mathematically, they found that the largest terms in the equation are due to ultra-high-energy effects, the regime of quantum gravity. Since we know very little about quantum gravity, the equation could not be solved and the Higgs mass could not be predicted. However, it is highly suspicious that some combination of these unknown yet ultra-high-energy terms results in something that is known to be 100 quintillion times smaller.

Many potential explanations have been proposed, but nothing is yet proven. One long-time favorite is supersymmetry, in which normal particles and supersymmetric particles contribute to the Higgs mass with opposite sign, resulting in a near-perfect cancellation naturally. Much like the mystery of the car breakdown, the coincidence could be explained by revealing an underlying connection, if only we can discover what that connection is.

Oct 18, 2013: Seeing the world with neutrino eyes

A simulation of what the Earth would look like if we could see only neutrinos. The Earth is transparent because neutrinos pass through it easily, and the spots on the surface are nuclear reactors. (Data source: Atomic Energy Agency.)

On Feb. 23, 1987, neutrino detectors in Ohio, Japan and Russia observed a burst of neutrinos. This type of experiment usually only sees neutrinos produced in the sun and the far more energetic neutrinos produced in the Earth’s atmosphere. These neutrinos trickle in at about 10 per day. In 13 seconds, however, the detectors saw 24 neutrinos. Hours later, astronomers witnessed the brightest supernova seen since the invention of telescopes. When the core of a star known as GSC 09162-00821 collapsed, 99 percent of its energy was radiated as neutrinos; the remaining 1 percent became a bright flash of light hours later.

Neutrinos are barely detectable particles produced in weak-force interactions, much as photons are particles of light produced in electromagnetic interactions. Unlike photons, neutrinos are so weakly interacting that they could pass through light-years of lead without much attenuation. If we could see the neutrinos, the universe would look quite different. We would be able to look directly at the core of the sun, where the nuclear reactions take place, rather than its relatively cool surface. The spinning Earth would look like the animation above, with a diffuse glow from natural radioactive elements in the Earth’s crust and bright spots emanating from nuclear power plants, easily visible through the planet.

If we could also selectively see neutrinos of different energies, we could focus on neutrinos from particle accelerators, which are typically much higher in energy than solar and supernova neutrinos and more consistent than the sparkle of neutrinos produced in the atmosphere by cosmic rays. I sometimes wonder if that would be the most conspicuous evidence of human civilization to faraway observers: high-energy neutrinos such as those from NuMI at Fermilab, revolving every 24 hours like a lighthouse beam.

However, just as there are extragalactic sources of ultra-high-energy cosmic rays, there may be ultra-high-energy neutrinos coming from active galactic nuclei, gamma-ray bursts and starburst galaxies, and they might also be formed when cosmic rays collide with photons. They may even come from decays of dark matter, if dark matter is not stable. Until last year, none of these ultra-high-energy neutrinos had been observed because so few neutrinos interact in a detectable way per cubic foot of ordinary matter. IceCube, an enormous detector that uses a billion tons of Antarctic ice as its detection medium, observed two ultra-high-energy neutrinos last year — each with about 100 times as much energy as an LHC collision. They may be the first extragalactic neutrinos seen since Supernova 1987A. If the signal is real and points back to a small region of the sky, we could be looking at a cosmic accelerator with neutrino eyes.

Sep 13, 2013: Dark energy, chunky or smooth?

One of the primary questions about dark energy is whether it is the same everywhere ("smooth") or varies in density from place to place ("chunky").

The discovery of dark energy was the most surprising scientific breakthrough in my lifetime. Many physicists consider it the most baffling of nature’s mysteries, and still little is known about it.

To say that it caused the textbooks to be rewritten is literally true. When I first studied cosmology, every textbook had a section on how the fate of the universe depends on the amount of matter it contains. As the universe expands, the gravitational force of all matter pulls against this expansion, slowing it down. If there is enough matter, the expansion can be reversed, pulling everything together into a big crunch. If there is too little, the universe will merely slow down as it flies apart. Dark energy is the discovery that the universe is not slowing down at all, but speeding up.

That’s pretty much all we know: There’s a force that more than counteracts gravity, blowing the universe apart at an ever-faster rate. Even the term “dark energy” is an empty label, since we don’t really know whether it’s a form of energy or not. It might not even be a thing. For instance, it may be that gravity repels, rather than attracts, on cosmic scales. There’s even a term in Einstein’s theory of gravity, called the cosmological constant, that might represent this large-scale repulsion.
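
For readers who want the equation: in standard cosmology the expansion rate obeys the Friedmann equation, and the cosmological constant Λ enters as its own term alongside matter,

$$\left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\,\rho - \frac{kc^2}{a^2} + \frac{\Lambda c^2}{3}$$

where a is the size scale of the universe, ρ is the density of matter and radiation, and k encodes spatial curvature. The matter term dilutes as the universe grows; the Λ term does not, so if Λ is positive it eventually wins and the expansion speeds up.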

But then again, dark energy might be a substance. This substance, often called quintessence, would have unusual properties like negative pressure. Substances can be distributed from place to place, so an observation of clumpy dark energy—more in some places than others—would teach us that it is a thing, rather than a law.

One way to search for clumps in dark energy is to map the sky with extraordinary precision. The gravitational effects of dark energy (and dark matter) leave imprints in the distributions of galaxies and the rate at which they form. These influences can be measured statistically in a sufficiently detailed map. The next generation of sky-surveying telescopes, the Dark Energy Survey, just began its five-year mission last week.

The nature of dark energy is also relevant for predicting the fate of the universe. If dark energy is a cosmological constant, a property of space itself, its density stays fixed as space expands, and the universe will keep accelerating forever into a cold, dilute future. If its density actually grows with time, the acceleration runs away and eventually tears apart galaxies, stars and atoms, a scenario known as the big rip. If it is a substance like quintessence, then there are many possibilities, depending on the exact nature of the substance. Apart from just wanting to understand the world around us, learning about dark energy could have consequences in the long, long, very long term (eons).

Aug 16, 2013: The shape of things that were

This shows the shape of the early universe as seen from outside of space and time. One spatial dimension is shown—the circumference of the bowl—and time is represented by the direction away from the bottom of the bowl. Inflation, nucleosynthesis, the cosmic microwave background and the first stars are not drawn to scale.

In our culture, the phrase “big bang theory” is often used to mean the idea that the universe was created in one explosive moment (or it’s a TV sitcom or a Styx album). For cosmologists, however, “big bang” means the early expansion of the universe, which might or might not have begun in an instant. The late stages of this process are better understood than the beginning: It ended with a sky full of stars, but at the beginning, even the laws of physics are unknown. Is a point of infinitesimal size and infinite density even possible? No one knows.

The big bang was no ordinary explosion. Not only did matter fly apart, as it does from fireworks, but space itself expanded from a small volume to a large volume. When scientists speak of expanding space, they mean a specific type of space-time curvature. On a curved object, such as a pear, the total length of one dimension varies as a function of the other. Near its stem, a pear’s circumference is small, but this circumference grows, levels out, grows again and shrinks as you go from the top of the pear to the bottom. In the same way, the volume of space grew from early times to late times, if we think of time as a dimension. In fact, the image above shows what this space-time shape would look like if we could see the universe and time from the outside.

Simply due to the shape of the bowl, later times have more elbow room than earlier times. This is why the early universe was so hot and dense. If you go back far enough, the entire universe was as hot as the center of a star. Just as stars fuse heavy elements from lighter ones, the early universe fused helium from hydrogen, and modern measurements of the hydrogen-to-helium ratio (about 3 to 1 by mass) agree exactly with the expected temperature (a billion degrees Celsius) and time available for this process (three minutes).
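
Here is the arithmetic behind that ratio, using the standard textbook input (about one neutron for every seven protons at the time helium formed, a number I am supplying rather than deriving):

```python
# Sketch of the helium abundance from big-bang nucleosynthesis.
# Assumption: about 1 neutron per 7 protons survived to fusion time,
# and essentially every neutron ended up inside a helium-4 nucleus.
n_over_p = 1.0 / 7.0

# Each helium-4 nucleus uses 2 neutrons and 2 protons, so by mass:
helium_fraction = 2.0 * n_over_p / (1.0 + n_over_p)  # ~0.25
hydrogen_fraction = 1.0 - helium_fraction            # ~0.75

print(hydrogen_fraction / helium_fraction)  # ~3.0, the 3-to-1 mass ratio
```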

At even earlier times, protons must have formed from quarks, and the Higgs field must have become as asymmetric as it is today. These earlier extrapolations are more uncertain, relying on discoveries about how matter behaves at such high energies (from colliders) and patterns in the distribution of matter (from telescopes). The shape of the earliest moments left its imprint in the cosmic microwave background, the light left over from when the whole universe glowed with heat. Advances in particle physics and cosmology could tell us whether space came from a point, like the bottom of a bowl, or started as a long bee-stinger called inflation (see figure). For all we know, the big bang was not the beginning of the universe, but a transition from some other kind of universe.

Jul 19, 2013: How real is relativity?

Rotating a picture frame mixes horizontal and vertical in much the same way that relativity mixes space and time.

Special relativity is a well-established fact of nature. Although we rarely encounter relativistic effects in everyday life, they are routine in the world of subatomic particles and in the cosmos. Objects traveling close to the speed of light become spatially compressed and experience time at a slower rate. For example, lead nuclei in a stationary brick are roughly spherical, but when these same nuclei accelerate and collide in the LHC, they flatten into pancakes that collide face-on. Particles resulting from the collision, such as kaons, take a longer time to decay than stationary kaons because their internal clocks run slower. These are all measurable effects that have been observed in colliders for decades.
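
To put numbers on the kaon example (an illustration with a speed I chose, not a specific measurement):

```python
import math

# Time dilation for a charged kaon moving at 99 percent of the speed of light.
TAU_REST = 1.24e-8   # s, mean lifetime of a charged kaon at rest
C = 2.998e8          # m/s, speed of light
beta = 0.99          # assumed speed as a fraction of c

gamma = 1.0 / math.sqrt(1.0 - beta**2)  # ~7.1, the time-dilation factor
tau_lab = gamma * TAU_REST              # lifetime as seen in the lab
flight = beta * C * tau_lab             # ~26 m; ~3.7 m if clocks did not slow
print(gamma, tau_lab, flight)
```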

But how real is this stretching of time and squashing of length? Perspective makes faraway objects look small and nearby objects look large, but we do not say that they really are smaller when they’re farther away. This is because the same object can look small to a faraway person and large to a nearby person at the same time. We usually don’t call an effect real unless it is consistent among observers.

The way relativity works is similar to rotation. Suppose you hang a painting on the wall and align it well. The horizon line in the painting is parallel to the baseboard on the wall. Tilt the painting 45 degrees, however, and now the painting’s horizon is half-horizontal and half-vertical. A horizontal ruler would measure a shorter horizon than a ruler aligned with the painting (1.41 times shorter: Try it!). Special relativity is this same phenomenon in time and space, rather than in horizontal and vertical.

To understand relativity, one must first think of time as a dimension. Imagine a flip-book, a stack of cards that shows an animated cartoon when you flip through them. Time is like the depth of the stack—every moment in time is a three-dimensional picture, stacked in some unvisualizable fourth direction. It can even be measured in units of length: 1 nanosecond of time is approximately 1 foot long. A stationary object is like a tower in time, in the same spot on each page of the flip-book. A moving object is like a leaning tower, slightly offset from page to page.

Tilting an object in space-time is much like tilting a painting on the wall. Just as the painting’s horizon line occupies less horizontal space when tilted, a fast-moving nucleus occupies less space in the direction of motion—this is why it flattens like a pancake. And since part of its spatial extent has rotated into the time direction, its temporal extent lengthens—this is why its time slows down. (There is a mathematical difference between rotation and relativity, but the two are very closely related.)
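
That mathematical difference can be stated compactly (standard special relativity, condensed). A rotation by an angle θ mixes horizontal and vertical with sines and cosines, while a boost with rapidity φ mixes space and time with hyperbolic sines and cosines:

$$x' = x\cos\theta - y\sin\theta, \qquad y' = x\sin\theta + y\cos\theta$$

$$x' = x\cosh\phi - ct\sinh\phi, \qquad ct' = ct\cosh\phi - x\sinh\phi$$

The time-dilation factor is γ = cosh φ. Swapping sin for sinh is what turns the closed circle of rotations into the open-ended squashing and stretching of relativity.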

So is relativity real or not? Like perspective, relativistic effects depend on your point of view: To a high-speed nucleus, we look flattened and it stays round. That’s the “relative” part of relativity. But unlike perspective, time dilation can have lasting effects. An astronaut who spends many years cruising at relativistic speeds would be physically younger than her twin when she gets back. Taking a shorter path through time has consequences that don’t depend on point of view.

Jun 21, 2013: Entropy is not disorder

Top-left: a low-entropy painting by Piet Mondrian. Bottom-right: a high-entropy painting by Jackson Pollock.

Entropy is a fundamental concept, spanning chemistry, physics, mathematics and computer science, but it is widely misunderstood. It is often described as “the degree of disorder” of a system, but it has more to do with counting possibilities than messiness.

Entropy is the number of configurations of a system that are consistent with some constraint. In thermodynamics, the study of heat, this constraint is the total heat energy of the system, and the configurations are arrangements of atoms and molecules. For instance, the molecules of water in an insulated thermos are always moving, colliding, changing positions and speeds, but in a way that keeps their total heat energy fixed. The number of molecular configurations is a very large number—so large that it would be an understatement to call it “astronomical.” To make this number manageable, entropy is described by the number of digits in the number of configurations: a million is 6, ten million is 7, a hundred million is 8, and so on.
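
The digit-counting rule is nothing more than a base-10 logarithm; in code (a trivial sketch):

```python
import math

# Entropy as "the number of digits in the number of configurations,"
# i.e. the base-10 logarithm of the multiplicity.
def entropy_in_digits(n_configurations: float) -> float:
    return math.log10(n_configurations)

print(entropy_in_digits(1e6))  # 6.0, a million configurations
print(entropy_in_digits(1e8))  # 8.0, a hundred million
```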

The association with disorder comes from the fact that we often call systems with many possible configurations “messy” and more constrained systems “clean,” but this need not be the case. The picture above compares the artwork of Piet Mondrian with that of Jackson Pollock. We could say that Pollock’s painting has more entropy, not because we subjectively think it’s messier, but because there are more possible configurations consistent with the artist’s intent. Move a single drop of paint a few inches and it’s still essentially the same painting. A stray drop on Mondrian’s painting would ruin it. In this case, the constraint is the artist’s vision and the entropy is the number of possible ways to realize it. We could call Pollock messy, but we could also call him open-minded.

In computer science, information entropy is measured in bytes, the same unit that quantifies the size of a file on disk. Data in a computer is a pattern of zeros and ones, and the number of possible patterns can be counted just like the number of configurations of molecules in a thermos or paint drops on a canvas. In this context, one wouldn’t think of the size of a data set as its degree of disorder.
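
For the curious, here is a minimal sketch of information entropy: the average number of bits needed per byte of a data stream (multiplying by the length and dividing by 8 gives a size in bytes):

```python
import math
from collections import Counter

def entropy_bits_per_byte(data: bytes) -> float:
    """Shannon entropy of a byte string: average bits needed per byte."""
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

print(entropy_bits_per_byte(b"aaaaaaaa"))        # 0.0: only one pattern possible
print(entropy_bits_per_byte(bytes(range(256))))  # 8.0: every byte equally likely
```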

When two particles collide, the collision debris has only a tiny amount of thermodynamic entropy, despite how messy it may look on the monitor. Lead ions, consisting of 208 protons and neutrons each, produce debris with more entropy than single-proton collisions, and this entropy is relevant in some studies of lead ion collisions. Information entropy was also important in the search for the Higgs boson—the algorithms used to search for this rare particle were designed to minimize the entropy of mixing between Higgs-like events and non-Higgs events, so that the Higgs would stand out more clearly against the noise.

Although entropy has different meanings in different contexts, it has one profound implication: If all configurations are equally likely, the total entropy can increase but not decrease. The reason is mathematical, having to do with a larger number of possible configurations being more likely than a smaller number. When applied to heat, it is called the Second Law of Thermodynamics, even though it is more a consequence of counting and probabilities than a law of nature.

May 24, 2013: Unity and symmetry

Apart from mass, the electromagnetic photon and the weak Z boson are the same particle—in two manifestations.

In this series of Physics in a Nutshell articles, Don and I have talked about each of the four fundamental forces of nature: electromagnetism, the strong force, the weak force and gravity. However, these four forces are not truly distinct. Many physicists are motivated by the idea that they are really four manifestations of a single principle, yet to be discovered. It’s already clear that two of them, electromagnetism and the weak force, are related by a unifying principle, known as electroweak symmetry.

Perhaps the deepest idea in physics is Noether’s Theorem, which states that symmetries in the laws of physics imply the existence of conserved quantities. For instance, the fact that an isolated experiment performed today would yield the same result as the same experiment performed tomorrow—time translation symmetry—is ultimately responsible for the conservation of energy. Since mass is a form of energy, this fact about the nature of time implies that matter is persistent and may be thought of as a substance.
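
Stated compactly (standard classical mechanics, not spelled out in the article): if the Lagrangian L of a system does not depend explicitly on time, then the quantity

$$E = \sum_i \dot q_i \frac{\partial L}{\partial \dot q_i} - L$$

is constant, and it is exactly the energy. Every continuous symmetry yields a conserved quantity in this same way.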

Similarly, the conservation of electric charge is due to a symmetry. It is called gauge symmetry, and, if you’re familiar with electronics, this is the reason that all voltages are measured relative to an arbitrary ground level. A circuit operating between 0 volts and 5 volts is the same as the circuit between 100 volts and 105 volts. When magnetism is added to the mix, gauge symmetry becomes more complicated, since you can trade electric voltages for magnetic potentials.
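
In symbols, and leaving magnetism aside: the electric field is the slope of the voltage, so shifting every voltage by the same constant changes nothing measurable,

$$\mathbf{E} = -\nabla V, \qquad V \to V + V_0 \;\Rightarrow\; \mathbf{E} \to \mathbf{E}$$

That harmless shift is the simplest example of a gauge symmetry.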

In the 1970s and ’80s, physicists discovered that the weak force can also be added to the mix. The gauge symmetry of the weak force is much more complex than that of electromagnetism, but the two are actually factors of a single equation. An experimental consequence of this is that the photons of electromagnetism and the Z bosons of the weak force are really the same particle (or half-and-half mixtures of two fundamental particles, depending on how you want to look at it). A photon can behave like a Z boson and vice-versa, under the right circumstances.
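
The “half-and-half mixture” has a precise form. In the Standard Model, the photon (γ) and the Z are rotations of two underlying fields, usually called B and W³, by the weak mixing angle θ_W:

$$\begin{pmatrix}\gamma\\ Z\end{pmatrix} = \begin{pmatrix}\cos\theta_W & \sin\theta_W\\ -\sin\theta_W & \cos\theta_W\end{pmatrix}\begin{pmatrix}B\\ W^3\end{pmatrix}$$

Measurements give sin²θ_W ≈ 0.23, so the mixture is not quite half-and-half, but the idea is the same.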

However, the photon is massless and the Z boson is so massive that only a handful of high-energy accelerators have ever created them. If the symmetry were exact, they would both be massless. Since electric and weak charges do exist, it is widely believed that the laws of physics have electroweak symmetry, but something in the environment “breaks” the symmetry.

The Higgs mechanism is one possible explanation. The idea is that we are immersed in a field that interacts with Z bosons, giving them an effective mass. It is thus impossible to do a truly isolated experiment. Since the particle discovered last year seems to be the long-sought Higgs boson, we may finally be able to test this theory.

Perhaps electroweak unification is only the first step. It would be intellectually satisfying if all forces derive from a single principle, but more importantly, that principle would reveal the conservation laws that give rise to matter itself.

Apr 26, 2013: The weak world

Like electromagnetism and the strong force, the weak force transfers momentum by tossing an intermediate boson. However, the act of throwing or catching the boson also transforms the particles.

Of the four fundamental forces, the weak force is the most mysterious. It is the only one with no obvious role in the world we know: The strong force builds protons and nuclei, electromagnetism is responsible for nearly every macroscopic phenomenon, and gravity, though weaker than the rest, is noticeable because of our close proximity to a reasonably large planet.

The only observable phenomenon due to the weak force is the radioactivity of certain substances (not all). I sometimes wonder if this major aspect of nature might have gone unnoticed if Henri Becquerel hadn’t kept his unexposed photographic film and his uranium samples in the same drawer. Early 20th-century physicists wondered why some rocks emit strange rays—it turns out that there’s a new force that transforms particles so that they are no longer bound to the nucleus. Mid-20th-century physicists wondered why this force is so weak—it turns out that its intermediary force carrier, its analog of the photon in electromagnetism, is very massive and therefore rarely produced. Physicists today wonder why the weak force carrier is so massive—it may be that there’s an omnipresent Higgs field binding to it, slowing it down and giving it effective mass. The particulate form of that Higgs field may have been discovered last year, at long last.

The weak force is the most eclectic of the four—it violates most of the conservation rules that the others uphold. The strong force, electromagnetism and gravity all act on antimatter the same way with the same strength as on equivalent samples of ordinary matter; the weak force does not. The same is true of mirror-flipped and time-reversed configurations; the weak force uniquely distinguishes between clockwise and counter-clockwise, between forward and backward.

In fact, interactions through the weak force can change the very identity of the particles involved. In previous articles, we showed how forces push and pull by exchanging an intermediary. In the case of electromagnetism, two charged particles repel by throwing a photon from one to the other, like a heavy sack thrown between two boats. For the weak force, this intermediary is either a charged W boson or a neutral Z boson. When a quark emits a W boson, it becomes a new type of quark: charm quarks turn into strange quarks, and, among the leptons, muons turn into neutrinos. In addition to carrying the momentum of the force, the W boson carries some of the strangeness or the charm out of one particle and delivers it to another.

In its unique role as rule-breaker, the weak force may be responsible for the matter-antimatter asymmetry in the universe. Its weakness may be hiding dark matter. The weak force seems to be tied to so many fundamental mysteries, it’s amusing to think that the whole thing might have been overlooked if Victorian physicists hadn’t been so curious about strangely warm rocks.

Mar 29, 2013: Electromagnetism, the simplest force

Much like the game of go, the basic rules of the electromagnetic force are simple, yet they play out in complex ways.

Electromagnetism is the training ground for modern physics, both in its historical development and in classrooms today. It is the simplest of the four forces, compared to the nuclear weak force, nuclear strong force and gravitation, but it gives rise to intricately rich patterns. All of the complex phenomena of everyday life, except for gravity and radioactivity, are due to the workings of electromagnetism. It makes chemical bonds, forming the basis of life, gives structure to solid and liquid matter, and makes lightning and aurora borealis twist across the sky.

And yet the fundamental rules of electromagnetism are startlingly simple: Like charges repel and opposite charges attract. If fundamental forces were board games, electromagnetism would be go, in which the rules can be learned in a few minutes but for which the strategy takes a lifetime to master. The other forces are more like chess, with more complicated fundamental interactions. The only problem with this analogy is that electromagnetism took physicists two centuries to learn and has never been mastered.

Electromagnetism was the first of the forces to be understood in a quantum context, in the late 1940s. The key insight was understanding that like charges repel because one emits a virtual photon and the other catches it, as described in the last Nutshell. Analogous to people in rowboats tossing a sack of flour back and forth, the momentum of the exchanged photons is transferred to the charges, pushing them apart.

So how do opposite charges attract? How can one boatman throw a sack of flour to another and have the boats drift toward each other? As it turns out, the quantum nature of the photon is essential. Photons can carry momentum without moving in a straight line between the throw and the catch: They pop from one location to another randomly. When two opposite charges attract, it is as though the sack of flour pops into existence behind the second boatman, pushing him toward the first. Think about that the next time two socks electrically cling in the laundry.

You may have heard of photons as “particles of light.” That’s true, too. Photons can be found in a virtual state, in which they act as described above, or a real one. Real photons are massless quanta of light, radio, gamma rays and any other frequency of electromagnetic radiation. Virtual photons are sometimes massive, sometimes here, sometimes there, always flitting back and forth between charges to bring them together or push them apart. Moreover, “virtualness” or “realness” is a matter of degree—visible light only approximates a massless train of waves.

The one aspect of electromagnetism that makes it seem simple (in comparison) is the fact that photons themselves do not attract or repel each other; they only affect charged particles. This feature separates electromagnetism from all the other forces. Strong force gluons are a sticky mess of gluons attracting gluons by emitting gluons, weak force dynamics are complicated by self-couplings, and gravity can attract itself in the form of a black hole. But the surprising thing is that the two nuclear forces look a lot like electromagnetism—they follow the same paradigm of virtual particles being tossed back and forth. They differ only in the details of how the pieces interact as they move across the board.

Mar 1, 2013: Cross section

A beam of particles is like a shower of arrows— the probability of any one hitting the target depends on their cross-sectional area and the space between them.

Sometimes, everyday words are co-opted by scientists and used as technical terms. One of these is the word “berry.” Talking to a botanist friend of mine, I learned that tomatoes are berries, but strawberries are not—the scientific meaning of a berry has more to do with the reproductive structures of the plant than the way it tastes. The term “cross section” is a berry of particle physics—its technical meaning is very different from the common usage.

In everyday speech, “cross section” refers to a slice of an object. A particle physicist might use the word this way, but more often it is used to mean the probability that two particles will collide and react a certain way. For instance, when CMS scientists measure the proton-proton to top-antitop cross section, they are counting how many top-antitop pairs were created when a given number of protons were fired at each other.

How did particle physicists come to use “cross section” in such a strange way? It’s a long story. In the early days of particle physics, particles were thought to be tiny indestructible balls. When marbles or billiard balls are rolled at each other, the probability that they will collide is proportional to the size of the balls, unless they are precisely aimed. Subatomic particles are so small that aiming individual particles at each other is out of the question—the best anyone can do is to shoot a lot of them in the same general area. The collision probability for a cloud of projectiles is simply the ratio of area covered by them to the total area of the cloud. When Xerxes darkened the sky with arrows at the Battle of Thermopylae, the probability of getting hit by an arrow was very high.

Early collision experiments were intended to measure the size of particles from their collision rate. Rutherford’s experiment, which collided alpha particles and gold nuclei in 1911, revealed that nuclei are much smaller than previously supposed. But soon, disparities arose: Neutrons are more likely to collide with certain nuclei when they are moving slowly than when they are fast. It is as though the neutrons change the area of their cross section mid-flight. Particles like neutrons are actually quantum clouds that pass through each other or interact with an energy-dependent probability—the likelihood of collision has little to do with a solid, cross-sectional area. Even though hard spheres are the wrong mental image, the term cross section stuck, and it’s common for a physicist to say, “this cross section depends on energy” when it would be nonsensical to imagine the size of the particle actually changing.

But why use “cross section” when alternatives like “probability” and “reaction rate” exist? Cross section is independent of the intensity and focus of the particle beams, so cross section numbers measured at one accelerator can be directly compared with numbers measured at another, regardless of how powerful the accelerators are. Arrows are arrows, no matter how many of them are fired into the sky.
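
Here is the bookkeeping in miniature, with round numbers that I am supplying (roughly right for the LHC around 2012). The expected number of events is the cross section times the “integrated luminosity,” which summarizes how many particles were fired and how tightly they were focused:

```python
# Expected events = cross section x integrated luminosity.
# Assumed round numbers, roughly LHC-2012-like (not from the article):
sigma_ttbar_pb = 250.0    # top-antitop production cross section, picobarns
luminosity_inv_fb = 20.0  # about one year of data, inverse femtobarns

luminosity_inv_pb = luminosity_inv_fb * 1000.0  # 1 fb^-1 = 1000 pb^-1
expected_events = sigma_ttbar_pb * luminosity_inv_pb
print(expected_events)    # ~5 million top-antitop pairs produced
```

The same cross section number could be plugged in with any other machine’s luminosity, which is exactly why the unit is convenient.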

Jan 18, 2013: The atom splashers

In some Civil War battles, the shooting was dense enough and prolonged enough for bullets to collide. The atoms of the metal bullets redistributed themselves as a liquid, much like the quarks and gluons of heavy ion collisions. Image source: brotherswar.com

In most particle physics experiments, physicists attempt to concentrate as much energy as possible into a point of space. This allows the formation of new, exotic particles like Higgs bosons that reveal the basic workings of the universe. Other collider experiments have a different goal: to spread the energy among enough particles to make a continuous medium, a droplet of fluid millions of times hotter than the center of the sun.

The latter studies, often referred to as heavy-ion physics, require collisions of large nuclei, such as gold or lead, to produce amorphous splashes instead of point-like collisions. Lead ions, for instance, contain 208 protons and neutrons. When two lead ions hit each other squarely head-on in the LHC, many of the 416 protons and neutrons are involved in the collision, unlike the single-proton-on-single-proton collisions used to search for the Higgs boson. With so many collisions in such close proximity, the debris of the nuclei mingles and re-collides with itself like atoms in a liquid. Instead of just splitting in half, the nuclei literally melt.

This is a bit like what happens when two bullets collide in mid-air. Immediately after impact, the atoms in the bullets have enough energy that the metal temporarily melts. Similarly, the quarks and gluons in the colliding lead nuclei spread and mingle as a droplet of fluid before evaporating into thousands of semi-stable particles.

This short-lived state of quark matter is unlike any other known to science. All other liquids, gases, gels and plasmas are governed by forces that weaken with distance. Water, for instance, is made of molecules that electromagnetically attract each other and repel oil. Clouds of interstellar dust are gathered by gravity and congealed by electromagnetism. In contrast, the quarks and gluons loosed by a heavy-ion collision are attracted to one another by the nuclear strong force, which does not weaken with distance. As two quarks start to separate from each other, new pairs of quarks and antiquarks join the mix with an attraction of their own.

This difference in the strong force law leads to surprising effects in the droplet as a whole. Experiments indicate that it is dense and strongly interacting, but with zero or almost no viscosity. As a result, it splashes through itself without friction. This differs from colliding bullets, which behave like clay because of the viscosity of liquid metal.

Quark matter is the stuff the big bang was made of. In the first microseconds of the universe, all matter was a freely flowing quark-gluon soup, which later evaporated into the protons and neutrons that we know today. Yet it is far from understood. It can only be produced in collisions and it is so short-lived that its properties have to be inferred from patterns in the particles that spatter away. Heavy-ion collisions in the LHC and RHIC at Brookhaven will tell us more about the origin of our universe.

Dec 7, 2012: A mixed bag of neutrinos

If we represent electron neutrinos (νe), muon neutrinos (νμ), and tau neutrinos (ντ) by pure red, pure green and pure blue, respectively, the three neutrino mass states (ν1, ν2 and ν3) would be fuchsia, lime and periwinkle, mixtures of the primary colors.

Quantum mechanics is an everyday fact of life for particle physicists. Most particles are short-lived and decay before they can be directly observed, and the weirdest quantum shenanigans are perpetrated by systems that can’t be observed. Neutrinos behave quantum mechanically, and even though they are not short-lived, neutrinos are so difficult to observe that they can maintain a mixed quantum state even while traveling large distances.

A quantum state—the current value of some aspect of a quantum system—is like a light switch. It can be off or on, but never in between. Before discovering quantum mechanics, physicists expected a system’s properties to be like dimmer switches that smoothly slide between off and on, but they’re not. In addition, quantum properties can also take on multiple values simultaneously—off and on, as illustrated by the figure below. Each possible state contributes some amount, such as 30 percent off and 70 percent on. This mixing allows neutrinos to spontaneously change from one type to another, a phenomenon known as neutrino oscillations.
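
In standard quantum notation, a 30-70 mixture of off and on is the state

$$|\psi\rangle = \sqrt{0.3}\,|\text{off}\rangle + \sqrt{0.7}\,|\text{on}\rangle$$

where the squares of the two coefficients give the 30 percent and 70 percent probabilities (the percentages are just the illustration’s numbers).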

Neutrinos are categorized by how they are produced: Electron neutrinos are produced along with electrons, muon neutrinos along with muons and tau neutrinos along with tau particles. They can also be categorized by mass: Neutrino 1, neutrino 2 and neutrino 3 all weigh different amounts. A neutrino cannot be categorized by production mode and by mass at the same time, however. A pure production state is a quantum mixture of all three mass states and a mass state is a quantum mixture of all three production states. A muon neutrino, produced along with a muon in the LBNE beamline for instance, would travel for hundreds of miles from Fermilab as a mixture of quantum states and may be observed in a detector in South Dakota as an electron neutrino.

This interplay between production states and mass states is what allows neutrinos to oscillate, or change from one production state to another and back again. The probability of each state actually wavers back and forth, the way the pitch of a gong wobbles as it resounds. The gong wobbles with a beat frequency because of a competition between two nearly matched resonances, which are analogous to the differences between neutrino masses in neutrino oscillations.
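
In the simplified case of only two neutrino types, the wavering probability has a compact standard formula (the full three-flavor version is similar but messier):

$$P(\nu_\mu \to \nu_e) = \sin^2(2\theta)\,\sin^2\!\left(\frac{1.27\,\Delta m^2\,[\mathrm{eV}^2]\,L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]}\right)$$

Here θ is the mixing angle, Δm² is the difference of the squared masses, L is the distance traveled and E is the neutrino’s energy. The beat between the mass states shows up as an oscillation in L/E.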

Physicists use production state transitions to learn more about the neutrino mass states. Neutrino masses are so small that it’s hard to measure them any other way, though they have cosmic significance. There are enough neutrinos in the universe that their feeble masses could affect how galaxies form and how space expands. The exact way that production states and mass states mix might even be associated with the disappearance of antimatter in the early stages of the Big Bang.

Nov 2, 2012: Spin!

Spin is angular momentum, quantized into discrete units such as +1 and -1.

In the original Karate Kid movie, the kid who wants to learn karate is frustrated by his teacher’s insistence that he spend his time painting fences and waxing cars. Only later is it revealed that the student had learned the fundamental moves without realizing it, blocking punches with “wax on, wax off.” Similarly, the deepest mysteries of physics are taught in Physics 101, but they’re hard to recognize in everyday objects like bicycle wheels and spinning tops.

One of these deep principles is the conservation of angular momentum. Angular momentum is the amount of rotation an object has, taking into account its mass and size, and it is curiously constant. A spinning figure skater has a constant angular momentum as she contracts her arms and twirls faster because a fast-spinning small object has as much angular momentum as a slow-spinning large object. A cloud of interstellar dust, stately drifting in a slow inward spiral, can eventually collapse into a pulsar the size of a city block, feverishly revolving around its axis a thousand times per second. The angular momentum stays the same.
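
As a toy calculation (round numbers of my own choosing, treating the star as a uniform sphere), conserving L = Iω while the radius shrinks makes the spin rate grow as 1/R²:

```python
import math

# Angular momentum conservation: I1 * omega1 = I2 * omega2.
# For a uniform sphere I = (2/5) M R^2, so at fixed mass omega ~ 1/R^2.
def spun_up(omega1: float, r1: float, r2: float) -> float:
    return omega1 * (r1 / r2) ** 2

# A slowly turning cloud with a radius of a million kilometers collapses
# to a 10-kilometer neutron star (only the ratio of the radii matters):
omega_cloud = 2 * math.pi / (30 * 86400)      # one turn per 30 days, in rad/s
omega_star = spun_up(omega_cloud, 1e6, 10.0)
print(omega_star / (2 * math.pi))             # ~4000 revolutions per second
```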

Fundamental particles are infinitesimal points of zero size, so it’s hard to imagine what it even means for them to rotate. We can, however, measure their angular momentum, and it seems to always come in discrete multiples of 1/2 like -1/2, 0, +1/2, +1, and nothing in between. This is a quantum mechanical effect, and each particle has an intrinsic or built-in angular momentum called spin. Particles of matter, such as electrons and quarks, all have spin 1/2 (called fermions), whereas particles of force or energy have integer spins: 0, 1, and 2 (called bosons). Photons, which are particles of light, can have spin -1 or +1, a phenomenon that is known to photographers as circular polarization.

Of all effects in quantum mechanics, the quantization of angular momentum was the hardest for me to accept because, for instance, we could simply put a particle on a record player and rotate it slowly, somewhere between 0 and 1/2, to try to defeat its all-or-nothingness. In fact, experiments like this have been done, not with single particles, but with buckets of quantum mechanical fluids such as liquid helium. This is the same fluid that was piped into the Tevatron to cool its magnets to superconducting temperatures. When liquid helium is rotated, it maintains a constant angular momentum by creating equal-sized vortices spinning the opposite way, in analogy with integer spins. These vortices are separate, discrete “units” of angular momentum, similar to how we think of spin. They even arrange themselves into regular patterns that precisely cancel the externally applied rotation.

Spin has many applications. In 3-D movies, for instance, the left-eye and right-eye images are projected using spin -1 photons and spin +1 photons. The 3-D glasses filter out the appropriate spin for each eye. (It’s a fortunate accident of biology that humans have as many eyes as photons have spin states.) Experimental new microchips use electron spins to store data, an emerging technology known as spintronics. Particle physicists use the spins of known particles to deduce the nature of new ones.

The new Higgs-like particle discovered this summer decays into two photons, and angular momentum must be conserved in that decay. If the new particle decays into opposite-spin photons, demonstrated by my cats in the top figure, then it must have had (+1) + (-1) = 0 spin, consistent with being a Higgs boson. If it decays into same-spin photons, then it must be a spin-2 particle, like a graviton. Scientists are currently struggling to measure that spin because it makes a big difference in how we interpret this discovery.

Oct 5, 2012: The platypus particle

A leptoquark would be a strange amalgam of familiar leptons and quarks, the way that a platypus has features of both mammals and birds. Image from Charles Baker, Animals, Their Nature and Uses (1877)

All of the atoms in our bodies are made of electrons, protons and neutrons, and the protons and neutrons can be further decomposed into quarks. At the bottom level, then, we are made of only two types of particles: electrons and quarks. But what do these labels mean? Why do we even say that electrons and quarks are different from each other?

Since they don’t come with nametags, we have to define particles by how they interact. It is a bit like cataloging wildlife on a new continent — at first, everything is strange, but eventually we see how the species can be grouped into patterns. Some animals quack and waddle, so we call them all ducks, while others are furry and build dams, and we choose to call them beavers. When physicists first explored the subatomic world, they noticed that there are two basic types of nuclear interactions, one much stronger than the other. To this day, they are called the weak force and the strong force because they never got better names.

Particles of matter were similarly grouped into two classes, leptons and hadrons, which come from Greek words for small and big. Curiously, leptons seem to be completely unaffected by the strong force while hadrons are utterly dominated by it. Although leptons, such as the familiar electron, can turn into other leptons – muons, taus and neutrinos – the total number of leptons in the universe appears to be constant (counting a matter lepton as plus one and an antimatter lepton as minus one). The same is independently true of quarks, the fundamental building blocks of hadrons. There may be a deep reason for this similarity, but it isn’t yet known.

The resemblance between leptons and quarks is even more striking when we arrange them by the ways they interact with the weak force. Many physicists suspect that the similarity between leptons and hadrons is not an accident, and that they might be connected somehow. If so, then there could be a new particle that is a little of both — a “leptoquark.” Such a thing would be as shocking as the discovery of the platypus, a mammal that lays eggs like a duck yet is furry like a beaver.

As the missing link between leptons and quarks, leptoquarks might explain how more matter emerged from the big bang than antimatter. They might also determine our fate, since they would provide a loophole in the accounting that currently keeps the electrons and quarks in atoms from annihilating with each other. If there were a bridge between leptons and quarks, then even ordinary matter could spontaneously decay into pure energy. It would only take a billion-trillion-trillion years.

Aug 10, 2012: Breaking supersymmetry

Like supersymmetry, most snowflakes are nearly, but not exactly, symmetric. Image: SnowCrystals.com

At its smallest scale, nature is highly symmetric. There are many different kinds of symmetries among particles — for instance, matter and antimatter are identical except for charge, like an image in a mirror that is the same as its object, but reversed. There are three generations of each type of matter (and antimatter) that are identical but for mass: small, medium and large. Photons of the electromagnetic force and Z bosons of the weak force also differ only in mass. One could say that particle physics is really the study of nature’s symmetries.

It should be no surprise, then, that physicists are seeking new symmetries to understand the universe at a deeper level. Supersymmetry is a hypothetical symmetry between matter and forces—if correct, matter and forces would be two sides of the same coin. The list of reasons that supersymmetry is attractive is a long one. In particular, it could explain why the Higgs boson is light enough to be observed at the Large Hadron Collider (LHC), which seems no longer to be a purely theoretical issue.

Exact supersymmetry would be very simple: Each particle of matter would have a corresponding particle of force with the same mass, such as quarks and “squarks” (supersymmetric quarks). However, no such thing has been observed. If it exists, supersymmetry must be inexact—quarks and squarks would have different masses, much like the inexact symmetry between massless photons and massive Z bosons. Supersymmetry’s attractiveness is undimmed by this imperfection, however. In fact, broken supersymmetry could explain how our complex world unifies at exceedingly high energies, where the little mass differences between quarks and squarks disappear.

But like a shattered mirror, broken supersymmetry is a messy business. There are over a hundred free parameters—knobs to turn—when describing the way that supersymmetry might be inexact. Each of them predicts different particles and different decay patterns. Searches for broken supersymmetry, whether at the LHC, in high-intensity experiments or in space, must be very general or have limited applicability.

To control the chaos, physicists often focus their attention on simple models, in which many of the free parameters are chosen to be equal to each other. The simplest is known as the constrained Minimally Supersymmetric Standard Model, or cMSSM. In this model, there are only five free parameters. When physicists seek supersymmetry, they often cast an eye toward the cMSSM for guidance. At the very least, it provides a concrete way to track progress. Each new experiment rules out new combinations of the cMSSM parameter values, which can be drawn on a plot like bites taken out of a cookie.

For 30 years, increasingly sensitive experiments have eaten away at the possibilities, shown in Figure 2. So far, the LHC has done the most damage: If the mass of cMSSM squarks were small enough to be within the LHC’s energy range, it would have produced them by now. As new data push the boundary further, the possible parameter combinations that remain aren’t attractive. It may be that we live in a cMSSM world with squark masses that are just outside of reach, but that would make the idea of supersymmetry less relevant—it would no longer solve the problems that motivated it in the first place, such as explaining why the mass of the Higgs boson is so low. Deciding when to pronounce the model dead is subjective, but there is a great deal of discussion about it, and the prognosis is not good.

What if the cMSSM is truly and finally ruled out? That would be a major milestone, a disappointment perhaps, but the broader subject of supersymmetry would not be closed. Physicists are already rolling up their sleeves and considering more general models of supersymmetry breaking, as well as alternatives to supersymmetry that solve some of the same problems. However this turns out, the picture is not as simple as it might have been.