
A TeV Linear Collider

SEP 01, 2004
An accelerator capable of colliding electrons with positrons at energies approaching a trillion electron volts tops the particle physicists’ wish list. It would have to be 30 kilometers long.

DOI: 10.1063/1.1809092

Ian Hinchliffe
Marco Battaglia

High-energy physics is entering a new era in which decisive experiments should yield deeper understanding of the basic building blocks of matter, their interactions, and their relation to the cosmos. As the Large Hadron Collider (LHC) nears completion at CERN, the worldwide particle-physics community argues that another large accelerator facility is needed. The community’s consensus is that this new facility should be a 30-km-long pair of linear accelerators that will fire electrons and positrons at each other at collision energies up to about 1 TeV. This article discusses what’s being done to realize that linear collider and what we can expect it to accomplish.

Experimental results and theoretical developments over the past 40 years have already provided a coherent theory of particle physics that has been verified with great precision. This so-called standard model ascribes three of the fundamental interactions to the exchange of force-mediating spin-1 particles, the so-called gauge bosons. The strong interactions, mediated by massless gluons, bind the quarks—the basic building blocks of matter—into protons, other hadrons, and nuclei. The electromagnetic interactions, responsible for the binding of atoms, molecules, and condensed matter, are mediated by the massless photon. The weak interactions, mediated by the W± and Z0 bosons, describe nuclear beta decay and the interactions of neutrinos. Because the masses of the W and Z are so high (respectively 80.4 and 91.2 GeV, almost 100 times the proton mass), the weak interactions are effective over only a very short range. 1

The great triumph of the standard model was the correct prediction of the W and Z masses, before there was an accelerator of sufficient energy to produce them. The W and Z were discovered in 1983 at CERN. A decade later, the top quark was found at Fermilab. Those discoveries, both at proton accelerators, marked the successful end of a phase; all the matter and force particles predicted by the standard model had finally been observed.

Throughout the 1990s, there were also two high-energy electron–positron colliders in operation: the Stanford Linear Collider (SLC) at SLAC and the Large Electron–Positron (LEP) collider at CERN. These e+e− machines allowed the properties of the neutral Z, in particular, to be studied in great detail. Toward the end of the decade, LEP reached collision energies up to 209 GeV, still the highest ever achieved in e+e− collisions. At that energy, W bosons could be produced in W+W− pairs, providing detailed information on the properties of the W.

Electrons and positrons, like all the other leptons but unlike protons, antiprotons, and the other hadrons, are thought to be pointlike particles. The ability to make such particles collide at well-defined and tunable energies gave machines like LEP and SLC considerable advantages over hadron colliders for precision measurement.

The Higgs mechanism

All these successes raise an even more fundamental set of questions. We don’t yet fully understand the mechanism that’s responsible for the nonvanishing masses of the W and Z bosons and the fundamental fermions: the quarks and leptons. In the standard model, those masses arise from interaction with a single, new, and as yet undiscovered, neutral particle: the Higgs boson (H). By this so-called Higgs mechanism, each particle acquires a mass proportional to the strength of its coupling to the Higgs boson.

The Higgs couplings are also believed to be responsible for the violation of symmetry under CP, the joint operation of charge conjugation (replacing all particles by their antiparticles) and spatial inversion. CP violation was first observed in 1964, in the decays of K mesons. Presently, it’s being extensively studied in the decays of the much heavier B mesons.

Many experiments have been searching for the Higgs boson. The absence of a significant Higgs signal at LEP energies means that its mass, M_H, exceeds 114 GeV. The standard model dictates that the Higgs boson also manifests itself indirectly, via quantum corrections to various measurable quantities such as the masses and decay properties of the W and Z. A global standard-model fit to such well-measured parameters 2 yields a best estimate of 117 GeV for the Higgs mass and an upper limit of about 250 GeV (see figure 1 and Physics Today, August 2004, page 26).


Figure 1. A global standard-model fit to well-measured low-energy parameters restricts the masses of the top quark and Higgs boson to lie within the red ellipse, with a confidence level of 68%. The green band shows the measured value of M_t, and the yellow region of M_H is already excluded by direct searches for the Higgs at LEP. The present best standard-model guess for M_H from such fits is 117 GeV, with an upper limit of about 250 GeV.


The next step in the exploration of higher energies will occur at the LHC, a proton–proton collider under construction in the same tunnel that housed LEP until 2001. The LHC’s beam energy will be 7 TeV, seven times that of the protons and antiprotons currently countercirculating in Fermilab’s Tevatron collider. Operation of the LHC is expected to start in 2007.

The LHC was designed to find the Higgs boson, whatever its mass, or exclude its existence. If the Higgs is discovered, the LHC will measure its mass with great accuracy and determine some of its couplings. But the accuracy of the measurements and their ability to test the Higgs’s presumed role will be limited by complexities unavoidable in a proton–proton collider: Experimental backgrounds are very large and theoretical prediction of the Higgs production rate as a function of M H is compromised by hadronic complications.

In the standard model, specifying the Higgs boson’s mass suffices to predict all of its properties. In particular, branching ratios for different Higgs decay modes are completely specified for any value of M_H (see the box on page 51). The Higgs would always decay into a particle–antiparticle pair. An e+e− linear collider of sufficient energy to produce the Higgs will make possible an exhaustive set of measurements of these decays and thus reveal whether the Higgs mechanism presumed by the standard model is, or is not, correct.

Observation of the Higgs boson would complete the search for the elementary particles predicted by the standard model. However, there are compelling reasons for believing that there is new physics beyond the standard model, which cannot be the final theory of elementary particles. For one thing, the standard model ignores gravity. But if an elementary particle had a mass or energy of 10¹⁹ GeV—the so-called Planck scale—its gravitational interactions would be comparable to its weak interactions.

If one extrapolates the strong, weak, and electromagnetic couplings of elementary particles to collision energies far beyond what accelerators can do, the coupling strengths seem to become comparable at energies about three orders of magnitude below the Planck scale. That’s tantalizing evidence of the possibility of a “grand unification” of those three forces that goes beyond the ad hoc unification of the standard model.
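For readers who want to see that extrapolation concretely, the following short calculation sketches the standard one-loop running of the three inverse couplings; the input values and beta-function coefficients are standard textbook numbers, not figures taken from this article.

```python
import math

# One-loop running of the inverse gauge couplings,
#   1/alpha_i(Q) = 1/alpha_i(M_Z) - b_i/(2*pi) * ln(Q/M_Z),
# with standard-model coefficients (hypercharge in the usual GUT
# normalization). Inputs are rounded textbook values, not article figures.
M_Z = 91.2  # GeV
alpha_inv_MZ = {"U(1)": 59.0, "SU(2)": 29.6, "SU(3)": 8.5}
b = {"U(1)": 41.0 / 10.0, "SU(2)": -19.0 / 6.0, "SU(3)": -7.0}

def alpha_inv(group, Q):
    """Inverse coupling of the given gauge group at scale Q (in GeV)."""
    return alpha_inv_MZ[group] - b[group] / (2.0 * math.pi) * math.log(Q / M_Z)

for Q in (1e3, 1e10, 1e13, 1e16):
    line = ", ".join(f"{g}: {alpha_inv(g, Q):5.1f}" for g in b)
    print(f"Q = {Q:.0e} GeV  ->  1/alpha = {line}")
# The three inverse couplings, far apart at the Z mass, approach one
# another around 1e13-1e16 GeV, a few orders of magnitude below the
# Planck scale of about 1e19 GeV.
```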

One would expect, however, that quantum corrections to the W, Z, and Higgs masses should drive their values up to the Planck mass. That makes the enormous gap between their actual masses and the Planck scale difficult to understand. New physics beyond the standard model, with new intermediate mass scales and quantum corrections, could solve this so-called hierarchy problem. If the scale of the new physics were near 1 TeV, the hierarchy problem would disappear.

There are several classes of candidate theories for such new physics. One class, of which supersymmetric theories are examples, has extra symmetries that introduce cancellations which lessen the impact of the quantum corrections that otherwise drive up the boson masses. Another class of models introduces new mass scales at which the theory changes character. There might, for example, be additional spatial dimensions beyond the obvious three. Even though the extra dimensions might be too small to have been noticed, they could invalidate the naive extrapolation of the standard model to the Planck scale. All these models have distinctive and definite predictions that would provide experimental signatures at an e+e− collider of sufficient energy.

Astrophysical observations have, in recent years, made a strong case that about 25% of the mass–energy of the universe is “dark” nonbaryonic matter of unknown character. (See Physics Today, April 2003, page 21 .) Although the standard model offers no viable candidate for dark matter, it is tantalizing to note that extensions of the theory that introduce new symmetries can predict the existence of dark-matter candidates—weakly interacting stable particles produced in the early cosmos and still surviving. One of the prime aims of the next generation of accelerators is to unveil signals of such new physics and understand how it relates to cosmology.

Supersymmetry

Arguably the best-motivated model of new physics beyond the standard model is supersymmetry. By introducing a supersymmetric partner for each standard-model particle species, the theory achieves the desired cancellation in the quantum corrections responsible for the hierarchy problem. The new, and as yet unseen, supersymmetric particles would have larger masses than their standard-model partners. And they would have opposite statistics: Fermions would have boson partners and vice versa. The heavier partners of the spin-1/2 leptons, for example, would be scalar (spin-0) bosons called sleptons.

Supersymmetry also ensures that the standard-model couplings evolve with increasing energy in such a way that they do indeed become equal at very high energy. Furthermore, the theory predicts the existence of a stable particle with about the right mass and couplings to account for the cosmological dark-matter density and its role in the evolution of large-scale structures in the universe.

If supersymmetry is realized in Nature, there are many new particles waiting to be discovered and measured. Heavier than the sleptons would be the “squarks,” the scalar superpartners of the spin-1/2 quarks. The LHC, being a hadron collider, would very likely discover strongly interacting superpartners: squarks and gluinos (the spin-1/2 partners of the spin-1 gluons) weighing less than 3 TeV. But fewer sleptons would be directly produced, because they have only electroweak interactions. At the LHC, hadronic background would make them harder to detect.

While the standard model predicts only a single Higgs particle, supersymmetric models predict several, with different masses. If there are multiple Higgs particles, the LHC should find at least one of them and measure some of its properties. An e+e− collider of sufficient energy could measure all of them and complete the picture. The accurate determination of the mass of the lightest supersymmetric stable particle and those of the sleptons will be crucial for understanding whether supersymmetry is indeed responsible for dark matter.

The rate at which events of a particular process (for example, Higgs production) occur at an e+e− collider is determined by the production cross section for that process and the collider’s luminosity—that is, the machine’s event rate per unit cross section. A collider’s luminosity depends on the intensity and focusing of its colliding beams. Production cross sections of interest in a high-energy e+e− collider fall rapidly with increasing beam energy E_b. They are typically of order 20 fb/E_b², with E_b given in TeV. One femtobarn (fb) is 10⁻³⁹ cm².

Thus higher-energy colliders require higher luminosities. To obtain 10 000 events of interest in a year of operation at a collider with E_b = 0.5 TeV would require a luminosity of about 10³⁴ cm⁻² s⁻¹. That’s a hundred times higher than the best luminosity LEP ever achieved with 0.1 TeV beams. Obtaining such high luminosity is the biggest technical challenge facing those who are seeking to design the next-generation linear collider.
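The arithmetic behind that luminosity figure is simple to check. In the short calculation below, the 10⁷-second operating year is a conventional assumption, not a number quoted in the article.

```python
# Back-of-the-envelope check of the luminosity requirement quoted above.
# The ~1e7-second operating year is an assumed convention, not an article figure.
E_b = 0.5                      # beam energy in TeV
sigma_fb = 20.0 / E_b**2       # typical cross section of interest, in femtobarns
sigma_cm2 = sigma_fb * 1e-39   # 1 fb = 1e-39 cm^2
events_wanted = 10_000
seconds_per_year = 1e7

luminosity = events_wanted / (sigma_cm2 * seconds_per_year)
print(f"cross section ~ {sigma_fb:.0f} fb")
print(f"required luminosity ~ {luminosity:.1e} cm^-2 s^-1")
# -> about 1e34 cm^-2 s^-1, roughly a hundred times LEP's best.
```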

The ability to produce the Higgs boson by the process e+ + e− → Z0 + H makes an e+e− collider of sufficient energy an ideal laboratory for studying the Higgs in detail. The momentum of the neutral Z can be accurately measured from its disintegration into charged lepton pairs, and the collider’s beam energy is precisely known. Therefore, one can infer the mass of the Higgs recoiling against the Z without having to measure its decay products. Even if the Higgs decay products were not detected, the existence of the Higgs would be revealed by a distinctive recoil-mass peak like the one simulated in figure 2.
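The recoil-mass trick itself is pure kinematics: with the total collision energy √s fixed by the beams and the Z fully reconstructed from its muon pair, the mass recoiling against it is M² = (√s − E_Z)² − |p_Z|². Here is a minimal sketch; the numbers in the example event are invented for illustration and chosen to correspond to a 120-GeV Higgs produced at √s = 350 GeV.

```python
import math

def recoil_mass(sqrt_s, E_Z, p_Z):
    """Mass recoiling against a reconstructed Z in e+e- -> Z + X.

    sqrt_s -- total collision energy in GeV, known from the beams
    E_Z    -- energy of the Z, reconstructed from its mu+mu- pair (GeV)
    p_Z    -- (px, py, pz) momentum of the Z in GeV
    """
    p_squared = sum(p * p for p in p_Z)
    m_squared = (sqrt_s - E_Z) ** 2 - p_squared
    return math.sqrt(max(m_squared, 0.0))

# Invented, idealized event: a Z of energy 166.3 GeV and momentum 139.1 GeV
# produced at sqrt(s) = 350 GeV recoils against a ~120-GeV object.
print(f"recoil mass ~ {recoil_mass(350.0, 166.3, (139.1, 0.0, 0.0)):.1f} GeV")
```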


Figure 2. Finding and weighing the Higgs boson at a linear collider. (a) The simulated detector display, projected on a plane normal to the beams colliding at the center, shows how the reaction e+e− → HZ might look. The central tracking system is surrounded by calorimeters. The Z boson decay to µ+µ− (the two penetrating tracks going down right) is measured much more precisely than the Higgs boson (H) decay to a pair of heavy b quarks that manifest themselves as two jets of final decay products going up left. The magnified inset shows that some tracks in the b-decay sequence do not originate at the collision point. (b) Even without measuring jets in such muon-pair events, one can get a clear Higgs signal and mass determination simply by plotting the mass that must be recoiling against the muon pair. In this simulated recoil-mass plot, which assumes that the Higgs mass is 120 GeV, the red region indicates true e+e− → HZ events and the black data points include background processes, which clearly do not obscure the Higgs peak.


The strengths of the couplings of the various quarks, leptons, and gauge bosons to the Higgs particle can be accurately determined from the relative rates of Higgs decays to different particle–antiparticle pairs. Thus one can test a fundamental assertion of the Higgs mechanism—namely, that the coupling strengths are simply proportional to the mass of the decay particle, irrespective of its other properties. The collider can also examine the self-interactions of the Higgs boson. Such an ensemble of measurements would either definitively establish the standard-model role of the single Higgs boson as the agent responsible for all the fundamental masses or, by finding discrepancies, point us toward an expanded theory with more than one Higgs-like particle.

In the case of the LHC, the composite nature of the proton means that not all of the 14 TeV energy of a pp collision is available to produce new particles. But for new particle masses below 1 TeV, one can generally expect large production rates. In most supersymmetric models, sleptons are expected to have masses of a few hundred GeV. For a standard-model Higgs, an e+e− collision energy of 150 GeV above the Higgs mass optimizes the cross section for Z + H production. Many detailed studies conclude that the LHC and an e+e− linear collider of appropriate energy would play important complementary roles.

Decay of the Higgs Boson

The plot shows the standard-model prediction for the branching ratios of Higgs boson decays to various particle–antiparticle pairs as a function of the still-unknown Higgs mass M_H. The so-called Higgs mechanism, a central tenet of the theory, requires that the strength of the Higgs boson’s coupling to any particular particle be simply proportional to that particle’s mass, irrespective of its other properties. Therefore, the Higgs decay rates to the heavy b and c quarks and to the heavy τ lepton are predicted to be large.

The W boson is so massive (80.4 GeV) that, for M_H less than 161 GeV, one of the decay W bosons has to be virtual. The even heavier top quark (t) couples to the decaying Higgs only as a virtual particle. But the Higgs coupling to virtual t and W pairs, as indicated by the red dots in the Feynman diagrams below, explains the predicted nonvanishing decay rates of the Higgs to massless gluons (g) and photons (γ).

The bands in the branching-ratio plot indicate theoretical uncertainties, and the simulated data points show the size of the errors expected in a linear collider experiment. If M_H exceeds 200 GeV, decays to W+W− and Z0Z0 are predicted to dominate Higgs decay.

[Box figure: predicted Higgs branching ratios versus M_H, with Feynman diagrams for Higgs decays to gluons and photons through virtual top-quark and W loops.]
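The proportionality rule can be made quantitative with the tree-level partial width for Higgs decay to a fermion–antifermion pair, which is proportional to N_c m_f² β³, where N_c counts the quark colors and β is the outgoing fermion velocity. The sketch below compares the main fermionic channels for a 120-GeV Higgs; the effective quark masses are illustrative assumptions, and real predictions, like the curves in the plot, include QCD and electroweak corrections.

```python
import math

def fermion_width(m_f, M_H, n_colors):
    """Tree-level Gamma(H -> fermion pair), up to a common overall constant:
    proportional to N_c * m_f^2 * beta^3, with beta = sqrt(1 - 4 m_f^2 / M_H^2)."""
    beta = math.sqrt(1.0 - 4.0 * m_f**2 / M_H**2)
    return n_colors * m_f**2 * beta**3

M_H = 120.0  # GeV, an illustrative Higgs mass
# Effective fermion masses in GeV (assumed illustrative values, roughly the
# running masses at the Higgs scale; not numbers quoted in the article).
channels = {"b quarks": (2.9, 3), "tau leptons": (1.78, 1), "c quarks": (0.62, 3)}

widths = {name: fermion_width(m, M_H, nc) for name, (m, nc) in channels.items()}
total = sum(widths.values())
for name, w in widths.items():
    print(f"{name:12s}: {w / total:5.1%} of the fermionic rate")
# b quarks dominate among the fermions: the coupling grows with mass,
# and quarks carry an extra color factor of 3.
```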

Colliding linacs

Much beyond the 100-GeV beam energies achieved at the LEP ring in its last years, a circular e+e− storage ring becomes totally impractical. The synchrotron radiation loss of the countercirculating electrons and positrons becomes prohibitive. (Protons, being far heavier, suffer much less synchrotron loss in storage rings.) Therefore a linear collider—two linacs lined up face-to-face, firing beams at a common focus from opposite directions—is the only possibility.
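The scaling behind that statement is the classic synchrotron-radiation formula: the energy an electron radiates per turn grows as the fourth power of its energy and falls with the ring's bending radius, and at fixed energy it scales as the inverse fourth power of the particle mass. A rough sketch, using the familiar rule of thumb ΔE ≈ 8.85 × 10⁻⁵ GeV × (E/GeV)⁴/(ρ/m) for electrons; the LEP-like bending radius is an assumed round number.

```python
# Energy radiated per turn by an electron circulating in a storage ring:
#   dE [GeV] ~ 8.85e-5 * (E [GeV])^4 / (rho [m])
# At the same energy a proton, being ~1836 times heavier, radiates about
# (1/1836)^4, i.e. roughly 1e13 times, less.
LOSS_CONSTANT = 8.85e-5          # GeV * m per GeV^4, for electrons
RHO_LEP_LIKE = 3.0e3             # assumed LEP-like bending radius in metres

def loss_per_turn_gev(energy_gev, bending_radius_m=RHO_LEP_LIKE):
    return LOSS_CONSTANT * energy_gev**4 / bending_radius_m

for energy in (45.0, 100.0, 250.0, 500.0):
    print(f"E = {energy:5.0f} GeV: loss per turn ~ {loss_per_turn_gev(energy):9.2f} GeV")
# Around 100 GeV the loss is already a few GeV (a few percent of the beam
# energy) every turn; toward 500 GeV it would exceed the beam energy itself.
```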

The next-generation e+e− collider is expected to cover a vast, diversified physics program. 3 At the low-energy end of its range, it must be able to run at a collision energy (2E_b) of 91 GeV, the Z mass, and there provide much larger event samples than those already obtained by LEP and SLC. These extensive low-energy data would sharpen the standard-model prediction of M_H, which may or may not agree with what the LHC will already have found. And they would provide important complementary information about new particles discovered at the LHC.

At higher energies, the linear collider must measure the mass and couplings of the top quark with much higher precision than is possible at the LHC. And at the top of its energy range, the collider must search for new phenomena at energies comparable to the effective quark–quark collision energies explored at the LHC. Because an individual quark carries only a small fraction of the momentum of the proton in which it lives, this last requirement means that the e+e− collider will have to reach collision energies up to about 1 TeV.
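The rough equivalence between a ~1-TeV e+e− machine and the 14-TeV LHC can be seen with one line of arithmetic: two colliding quarks carrying momentum fractions x₁ and x₂ of their protons have an effective collision energy of √(x₁x₂) times the proton–proton energy. The momentum fractions in the sketch below are illustrative values, not figures from the article.

```python
import math

SQRT_S_PP = 14.0e3   # LHC proton-proton collision energy in GeV

def parton_collision_energy(x1, x2, sqrt_s=SQRT_S_PP):
    """Effective energy of a quark-quark collision when the quarks carry
    momentum fractions x1 and x2 of their parent protons."""
    return math.sqrt(x1 * x2) * sqrt_s

for x in (0.05, 0.10, 0.20):
    print(f"x1 = x2 = {x:.2f}: effective energy ~ "
          f"{parton_collision_energy(x, x) / 1e3:.1f} TeV")
# Typical momentum fractions of a few to twenty percent give quark-quark
# energies of roughly 0.7-3 TeV, the range that a ~1-TeV e+e- collider
# probes with its full collision energy.
```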

This range of collision energies—a full order of magnitude, from the Z mass to 1 TeV—is unprecedented. The collider’s luminosity must, of course, increase with beam energy to compensate for the falling production cross sections. Accelerating the beams to the highest energies within a manageable collider length—approximately 30 km—and with an acceptable power consumption (a few hundred MW) requires RF cavities with high accelerating gradients and good efficiency.
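The link between gradient and length is just multiplication: each meter of active accelerating structure adds an energy equal to its gradient, so the beam energy divided by the gradient sets the active length, and the tunnel is somewhat longer still. In the sketch below, the fill factor (the assumed fraction of the tunnel occupied by accelerating structures) is an illustrative number, not a design value from the article.

```python
def collider_length_km(beam_energy_gev, gradient_mv_per_m, fill_factor=0.7):
    """Rough total length of two linacs reaching the given energy per beam.
    fill_factor is the assumed fraction of tunnel occupied by accelerating
    structures (an illustrative value, not a design figure)."""
    active_length_m = beam_energy_gev * 1.0e3 / gradient_mv_per_m  # 1 MV/m gives 1 MeV/m
    one_linac_km = active_length_m / fill_factor / 1.0e3
    return 2.0 * one_linac_km

for gradient in (25.0, 35.0, 55.0):
    length = collider_length_km(500.0, gradient)   # 500 GeV per beam, 1 TeV collisions
    print(f"{gradient:4.0f} MV/m  ->  ~{length:3.0f} km of linac for 1-TeV collisions")
# Gradients of several tens of MV/m are what keep a 1-TeV facility near
# the 30-km scale quoted in the article.
```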

Achieving adequate luminosity will require that the colliding beams be kept very narrow and finally focused down to spots of nanometer size at the collision point. That’s a formidable challenge. For almost two decades, three alternative accelerating technologies—copper RF cavities, superconducting RF cavities, and drive-beam acceleration—have been under development. Design optimization and testing are being done in large facilities at several leading accelerator laboratories around the world. The goal is to demonstrate existence proofs for the basic building blocks of the linacs, the damping rings, and the final focus systems (see figure 3).


Figure 3. Schematic of the electron–positron collision region of the next-generation linear collider. The e+ and e− beams accelerated to high energy from opposite directions in two 15-km-long underground tunnels collide head-on at a point surrounded by the 15-m-diameter detector facility. The tunnels are filled with klystrons (SLAC prototype shown) and the accelerating cavities (superconducting TESLA design shown) to which they feed RF power. Positrons are created by a fraction of the fully accelerated electron beam hitting a target near the collision region. The low-energy positrons from the target are sent to a small synchrotron damping ring above ground (damping ring at KEK test facility shown) to reduce their phase-space spread before acceleration.


High-frequency, room-temperature copper accelerating cavities are a natural evolution of the technology successfully applied at the SLC, the only previous linear e+e− collider. With some 12 000 cavities operating at 11.4 GHz (the X-band in microwave parlance) to create an accelerating gradient of 50 or 60 MV/m, such a collider could reach a collision energy of 1 TeV in a total length of about 30 km. The “warm copper” X-band approach is the basis of designs jointly developed by SLAC and the KEK laboratory in Japan. 4,5 A proof of principle has recently been established.

The principal alternative to warm copper is superconducting cavities. They can get better power-transfer efficiencies by accelerating the beams with RF pulses of longer duration. When superconducting cavities were introduced at LEP in the 1990s, they achieved accelerating gradients of 6 MV/m. Nowadays, thanks to the R&D program for the TeV linear collider, they do better than 25 MV/m. The TESLA project, 6 whose R&D and design has been centered at the German Electron Synchrotron Laboratory (DESY) in Hamburg, proposes a linear collider based on 21 000 superconducting niobium cavities operating at 1.3 GHz. TESLA is designed to provide high luminosities and collision energies ranging from 90 to 500 GeV. A later upgrade might reach 1 TeV.

The TESLA test facility at DESY has already demonstrated the feasibility of producing and operating the cavities in full cryomodules at the requisite gradient for a 500-GeV collider. The German government recently approved the construction of an x-ray light source that uses a 50-GeV electron linac based on TESLA’s superconducting accelerating technology.

Achieving multi-TeV collisions is more problematic and therefore under consideration primarily for the farther future. The Compact Linear Collider (CLIC) project at CERN is aiming for collision energies of 3–5 TeV at very high luminosity. 7 CLIC would use “two-beam” acceleration: A low-energy, high-intensity drive beam of electrons would feed 30-GHz microwave power to the main high-energy beam. In principle, such a scheme could achieve accelerating gradients of order 150 MV/m. CLIC still requires significant R&D to demonstrate its feasibility.

In the long run, it would also be useful to have polarized electron and positron beams. Because the weak interactions do not conserve parity, electrons spinning left- and right-handed behave differently in collisions. For example, the production of W pairs, which represents an obscuring background to some searches for new particles, could be suppressed by making the electron beam predominantly right-handed. Polarized electron beams were available at SLC; they proved crucial in analyzing the couplings of the Z boson. Polarized positrons would provide a further tool for analyzing the couplings of the old and new particles, and for reducing systematic experimental uncertainties.

Another useful option would be to make high-energy photon beams by backscattering laser light off the electron or positron beam. With a 500-GeV electron beam, one could get 400-GeV photons. The resulting photon–photon collider could be used to produce and study the Higgs in isolation from any other final-state particles.
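The 400-GeV figure follows from Compton kinematics. For a laser photon of energy ω₀ backscattered head-on from an electron of energy E_e, the hardest scattered photons carry E_e · x/(x + 1), with x = 4E_eω₀/(m_ec²)². The laser photon energy in the sketch below is an assumed illustrative value (roughly a 2-µm infrared laser), chosen so that the result lands near the number quoted above.

```python
ELECTRON_MASS_GEV = 0.511e-3

def max_backscattered_photon_gev(electron_energy_gev, laser_photon_ev):
    """Maximum photon energy from head-on Compton backscattering:
    E_max = E_e * x / (x + 1), with x = 4 * E_e * w0 / (m_e c^2)^2."""
    w0_gev = laser_photon_ev * 1.0e-9
    x = 4.0 * electron_energy_gev * w0_gev / ELECTRON_MASS_GEV**2
    return electron_energy_gev * x / (x + 1.0)

# Assumed illustrative laser photon energy of 0.6 eV (about a 2-micron laser).
E_gamma = max_backscattered_photon_gev(500.0, 0.6)
print(f"max backscattered photon energy ~ {E_gamma:.0f} GeV")
# A 500-GeV electron beam yields photons of roughly 400 GeV, as stated above.
```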

Detectors

The ability of the linear collider to answer the deep questions depends not only on large event rates at high energies but also on how well experimenters can decipher what comes out of a collision. The massive detector complex that surrounds the collision point must provide adequate precision in the reconstruction of each outgoing particle. That accuracy must be preserved despite the presence of formidable backgrounds. Although backgrounds are much worse in a hadron collider of comparable energy, they are nonetheless significant in a high-energy e+e− machine.

As two examples of what a detector at the linear collider will have to do beyond what’s already been demonstrated, consider the demands on its vertex tracker and its electromagnetic calorimeter.

Vertex detectors are tracking devices of extremely high spatial resolution surrounding the immediate region where the beams collide. Proving that the Higgs is indeed responsible for the masses of the fundamental fermions requires measuring its couplings to each quark and lepton species with high precision. Jets of particles arising from Higgs decay to the heavy bottom (b) and charmed (c) quarks contain particles with lifetimes of order 10⁻¹² seconds. By extrapolating the particle tracks back to their points of origin with great precision, one can exploit these lifetimes to distinguish individual heavy-quark jets from one another and from jets created by gluons or the lighter quarks. The LHC detectors are not designed to identify c quarks, and their efficiency at identifying b quarks is less than one would need at a linear collider. That’s because the e+e− collider, with less stringent radiation-hardness requirements on its vertex detector than one needs in a hadron machine, could avail itself of a new generation of silicon pixel sensors that promise a threefold improvement on the best tracking precision of existing e+e− vertex detectors.
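The tracking precision that statement implies can be estimated from the lifetime alone: a relativistic particle travels L = βγcτ on average before decaying, so a picosecond-scale lifetime inside a jet translates into a flight path of a couple of millimeters and track impact parameters of order a hundred micrometers. The particle mass, energy, and lifetime in the sketch are illustrative assumptions.

```python
import math

C_MM_PER_PS = 0.2998   # speed of light in millimetres per picosecond

def mean_decay_length_mm(energy_gev, mass_gev, lifetime_ps):
    """Mean flight path of a relativistic particle, L = beta * gamma * c * tau."""
    gamma = energy_gev / mass_gev
    beta_gamma = math.sqrt(gamma**2 - 1.0)
    return beta_gamma * C_MM_PER_PS * lifetime_ps

# Illustrative B hadron inside a jet: mass ~5.3 GeV, energy ~30 GeV,
# lifetime ~1.5 ps (assumed values, not figures from the article).
print(f"mean decay length ~ {mean_decay_length_mm(30.0, 5.3, 1.5):.1f} mm")
# A flight path of a few millimetres displaces the decay tracks from the
# collision point by impact parameters of order 100 micrometres, which is
# why the vertex detector needs few-micrometre precision to tag b and c jets.
```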

The linear collider’s detector will have to detect and precisely measure the energies of electrons, muons, and hadrons—both charged and neutral. The hadrons are a particular challenge. Precision must be maintained over the large span of final-state particle energies. At the LHC, the energies of jets are measured by calorimeters in which all the energy of the jet particles is deposited and summed. In the e+e− collider, where each collision produces far fewer particles, one can measure the energies of the individual particles that make up each jet. Charged-particle momenta are measured by the curvature of trajectories in the magnetic field of a large-volume tracking chamber; photons are measured from their energy deposition in a high-resolution electromagnetic calorimeter; and neutral hadrons such as neutrons are absorbed and measured in a hadron calorimeter.
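The charged-particle measurement mentioned above rests on the relation p_T ≈ 0.3 B R between a track's transverse momentum in GeV, the magnetic field in tesla, and its radius of curvature in meters. The sketch below uses the roughly 4-tesla solenoid field discussed later in this article; the track momentum and tracking lever arm are illustrative assumptions.

```python
def radius_of_curvature_m(pt_gev, field_tesla):
    """Bending radius of a charged track: p_T [GeV] ~ 0.3 * B [T] * R [m]."""
    return pt_gev / (0.3 * field_tesla)

def sagitta_mm(pt_gev, field_tesla, lever_arm_m):
    """Sagitta of the track arc across the tracking volume (small-angle limit)."""
    radius = radius_of_curvature_m(pt_gev, field_tesla)
    return lever_arm_m**2 / (8.0 * radius) * 1.0e3

# Illustrative 50-GeV track in a 4-T field, measured over a 1.5-m lever arm
# (the momentum and lever arm are assumed values, not article figures).
print(f"bending radius ~ {radius_of_curvature_m(50.0, 4.0):.0f} m")
print(f"sagitta        ~ {sagitta_mm(50.0, 4.0, 1.5):.1f} mm")
# Measuring a few-millimetre sagitta to a small fraction of itself is what
# sets the required point resolution of the central tracker.
```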

To avoid double counting of charged particles already measured in the tracking chamber, particle-by-particle jet reconstruction requires that the calorimeters have spatial resolution better than what’s currently available. Layers of absorbing tungsten interleaved with silicon detecting planes are presently being studied in prototypes for the electromagnetic calorimeter. Their segmented design would provide sufficient localization to match each energy deposit to its initiating particle in the crowded jet (see figure 4).


Figure 4. Simulated jet of dozens of hadrons and leptons constituting a single quark jet from the decay of a W boson created in an 800-GeV e+e− collision. 9 This display of the joint output of a cylindrical charged-particle tracking chamber (fine tracks at bottom) and a surrounding electromagnetic calorimeter (broader dots at top) matches tracks of the same particle in the two regions and assigns them the same (random) color. The calorimeter is a highly segmented sequence of tungsten layers, in which charged and neutral particles interact electromagnetically or hadronically, interspersed with silicon layers that detect the interaction products. The energy loss in the tungsten distinguishes different particle species. Muons, for example, are almost impervious to thin metal layers. The calorimeter’s spatial resolution is good enough for reliable matching of localized energy deposits with trajectories in the tracking chamber. The next-generation linear collider will need a calorimeter with similar capabilities.


Studies carried out over almost a decade in Europe, the US, and Japan have established a general consensus on the detector design. The cylindrical detector would be about 15 m in diameter and length, with a large central tracking chamber followed by the calorimeters inserted inside the superconducting coil that provides the chamber’s solenoidal magnetic field of about 4 tesla. To precisely reconstruct the charged-particle trajectories near their production vertices, a high-resolution vertex tracker will surround the beam vacuum pipe at the collision point.

Much remains to be accomplished. In particular, the tracking detector must have as little material as possible, lest scattering and photon interactions compromise the measurement. In contrast to the LHC, which requires a strict preselection of the tiny fraction of collision events to be recorded, the much lower event rate at an e+e− collider allows all events to be logged for later offline analysis, thus ensuring sensitivity to all processes, irrespective of experimental signature.

Realizing the project

In the past two years, the next-generation linear collider has made significant strides toward full maturity as a project that’s ready for approval. The state of the competing designs and the required R&D effort was summarized in a February 2003 report by a panel of the International Committee on Future Accelerators (ICFA). 8 Since then, both the superconducting and warm accelerating technologies have demonstrated their viability for a machine starting out with a collision energy of about 500 GeV that could later be upgraded to 1 TeV. The CLIC technology, with its promise of even higher energies in the long run, is regarded as less mature at this stage.

Given its cost and complexity, the linear collider will have to be realized as a worldwide project, with the host country or region bearing a substantial fraction of the cost. The ITER program for a prototype fusion reactor, an undertaking of comparable cost and complexity, is also being planned as an international project (see Physics Today, August 2004, page 28 ). An international linear-collider steering committee was therefore created under the auspices of ICFA in 2002. Among its members are the directors of the major high-energy-physics laboratories. The steering committee has begun moving toward a formal worldwide project proposal and the development of a model of interregional collaboration for construction and operation.

Remote operation of the accelerator and the detector from control centers around the world is being considered. Astronomers already do that, on smaller scales, with satellites and large telescopes. But it would be a first for a facility of such enormous scale and complexity. R&D for the collider has now reached the stage at which the community can make a final decision on its technical feasibility and an informed choice between the superconducting and warm-copper alternatives.

Particle physicists worldwide have already expressed overwhelming support for the e+e− linear collider as the next large-scale facility they will need to advance the understanding of nature from the quarks to the cosmos. In its recent 20-year plan, the Office of Science of the US Department of Energy has ranked the linear collider at the top of its priority list for midterm projects. Farther afield, the Organisation for Economic Co-operation and Development, a consortium of 30 industrialized countries, has praised the collider’s potential for future development in scientific research.

By the end of this year, a review committee will have recommended which of the two competing RF technologies should be chosen for a 500-GeV collider, upgradeable to about 1 TeV. Then, following a final phase of development and optimization of all the accelerator components for the chosen technology, a budgeted project proposal can be prepared.

Where will the 30-km collider be built? Surveys of potential sites are under way. Among the many issues that have to be considered are geophysical stability, ease of access, and land acquisition. Then, around the time the first results from the LHC are expected, about three years from now, the project could proceed toward approval, with the aim of having the first collisions by the middle of the next decade.

High-energy physics is at the dawn of a new era of decisive discoveries. The LHC is guaranteed to bring us closer to answering many of the most pressing questions. There are important measurements that the LHC cannot accomplish with the desired accuracy, and some that it cannot do at all. The linear collider will provide the accuracy and complementary measurements that are needed to complete the picture. The synergy of the data provided by the LHC and an e+e collider of comparable energy will be crucial for answering the fundamental questions of particle physics and its overlap with cosmology.

In the recent past, a similar synergy between LEP and SLC, on the one hand, and the big hadron colliders at CERN and Fermilab on the other, was indispensable in bringing us the current understanding of the standard model. If the linear collider and the LHC validate the Higgs mechanism and also reveal new physics that would explain dark matter, it will be a triumph for both particle physics and cosmology. For almost two decades, the linear collider has been the goal of an intense worldwide development effort. The next few years will be crucial for turning it into a successful international facility for fundamental research.

References

  1. For a review, see, for example, W. N. Cottingham, D. A. Greenwood, An Introduction to the Standard Model of Particle Physics, Cambridge U. Press, New York (1998), or http://particleadventure.org/particleadventure/frameless/standard-model.html .

  2. The LEP collaborations et al., http://arXiv.org/abs/hep-ex/0312036 .

  3. H. Murayama, M. E. Peskin, Annu. Rev. Nucl. Part. Sci. 46, 533 (1996). https://doi.org/10.1146/annurev.nucl.46.1.533

  4. T. Abe et al., http://arXiv.org/abs/hep-ex/0106055 , hep-ex/0106056, hep-ex/0106057, hep-ex/0106058. See also http://www-project.slac.stanford.edu/nlc/home.html .

  5. See http://lcdev.kek.jp/RMdraft and http://www-jlc.kek.jp .

  6. See http://tesla.desy.de/new_pages/TDR_CD/start.html/ and http://tesla.desy.de/ .

  7. See http://ps-div.web.cern.ch/ps-div/CLIC/Welcome.html .

  8. International Linear Collider Technical Review Committee, Second Report, available at http://www.slac.stanford.edu/xorg/ilc-trc/2002/2002/report/03rephome.htm .

  9. J. C. Brient, H. Videau, in Proc. Snowmass 2001 Summer Study on the Future of Particle Physics, available at http://www.slac.stanford.edu/econf/C010630/proceedings.shtml#10 .

More about the Authors

Ian Hinchliffe and Marco Battaglia are physicists at Lawrence Berkeley National Laboratory. Battaglia is also a professor of physics at the University of California, Berkeley.


