Random thoughts about random subjects… From science to literature and between manga and watercolours, passing by data science and rugby; including film, physics and fiction, programming, pictures and puns.
This is a reblog of a post in Physics Today, written by Andrew Grant.
The researchers are recognized for their contributions to theoretical cosmology and the study of extrasolar planets.
James Peebles, Michel Mayor, and Didier Queloz will receive the 2019 Nobel Prize in Physics for helping to understand our place in the universe through advances in theoretical cosmology and the detection of extrasolar planets, the Royal Swedish Academy of Sciences announced on Tuesday. Peebles is a theoretical cosmologist at Princeton University who helped predict and then interpret the cosmic microwave background (CMB) and later worked to integrate dark matter and dark energy into the cosmological framework. Mayor and Queloz are observational astronomers at the University of Geneva who in 1995 discovered 51 Pegasi b, the first known exoplanet to orbit a Sunlike star. Peebles will receive half of the 9 million Swedish krona (roughly $900 000) prize; Mayor and Queloz (who also has an appointment at the University of Cambridge) will share the other half.
The contributions of Peebles and of Mayor and Queloz helped jumpstart their respective fields. Over the past few decades, researchers have developed the successful standard model of cosmology, Lambda CDM, though the nature of both dark energy and dark matter remains an open question. Meanwhile, astronomers have used the radial velocity technique employed by Mayor and Queloz, along with the transit method and even direct imaging, to discover and characterize a diverse population of thousands of exoplanets. Data from NASA’s Kepler telescope suggest that the Milky Way harbors more planets than stars.
Connecting past with present
“More than any other person,” writes Caltech theoretical physicist Sean Carroll on Twitter, Peebles “made physical cosmology into a quantitative science.” His contributions began even before Arno Penzias and Robert Wilson’s 20-foot antenna at Bell Labs picked up the unexpected hum of 7.35 cm microwave noise that would come to be known as the CMB. Working as a postdoc with Robert Dicke at Princeton, Peebles predicted in a 1965 paper that the remnant radiation from a hot Big Bang, after eons of propagating through an expanding universe, would have a temperature of about 10 K. In a subsequent paper Peebles connected the temperature of the CMB, measured by Penzias and Wilson at 3.5 K (now known to be 2.7 K), to the density of matter in the early universe and the formation of light elements such as helium.
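The kernel of that prediction is a textbook scaling rather than anything specific to Peebles’s 1965 calculation: the radiation’s blackbody temperature falls in inverse proportion to the cosmic scale factor, so light released when the primordial plasma became transparent at roughly 3000 K arrives today cooled about a thousandfold. In modern numbers, rather than the rougher constraints available in 1965,

$$ T_0 = \frac{T_{\rm rec}}{1+z_{\rm rec}} \approx \frac{3000\ \text{K}}{1 + 1100} \approx 2.7\ \text{K}, $$

where $z_{\rm rec} \approx 1100$ is the redshift of recombination.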
In 1970 Peebles and graduate student Jer Yu predicted a set of temperature fluctuations imprinted in the CMB due to the propagation of acoustic waves in the hot plasma of the infant universe. Decades later, the Cosmic Background Explorer (COBE), the Wilkinson Microwave Anisotropy Probe (WMAP), and, most recently, the Planck satellite would measure a similar power spectrum in the CMB. “The theoretical framework that he helped create made testable predictions,” says Priyamvada Natarajan, a Yale theoretical astrophysicist. “They still inform a lot of the observational tests of cosmology.”
Peebles also considered the connection between those fluctuations and the large-scale structure of the universe we observe today, as measured through galaxy clusters in sky surveys. “His idea that you can see the initial conditions and dynamics of the universe in the clustering of galaxies transformed what we could do as a community,” says New York University astrophysicist David W. Hogg.
Peebles’s view of the CMB and what it embodies proved especially important in the early 1980s, when cosmologists struggled to reconcile the deduced densities of matter in the infant universe with the large-scale structure that ultimately emerged. In a 1982 paper, Peebles proposed a solution in the form of nonrelativistic dark matter. Long after escaping the dense confines of the infant cosmos, that cold dark matter (CDM) would form the cocoons in which ordinary matter clumped into galaxies and then galaxy clusters. His paper built on the work of Vera Rubin, whose measurements with Kent Ford of the rotation curves of the Andromeda galaxy were critical toward demonstrating that dark matter must be the dominant component of galactic halos, to keep disks of stars and gas from flying apart. Subsequent satellite measurements have revealed that collectively dark matter has about five times the mass of ordinary matter.
By the 1990s it was becoming clear that a model containing just CDM, ordinary matter, and photons couldn’t account for all the observed properties of the universe, notably the value of the Hubble constant. The result is Lambda CDM, the cosmological model that describes the universe with six precisely measured parameters and accounts for the 1998 discovery that the universe’s expansion is accelerating. Peebles was one of the theorists to propose resurrecting Albert Einstein’s once-discarded cosmological constant to describe the newly discovered dark energy, which makes up more than two-thirds of the mass–energy content of the universe.
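In standard textbook notation (not any single paper of Peebles’s), the model’s name is literal: the Friedmann equation gains a constant $\Lambda$ term alongside radiation and matter, the latter dominated by cold dark matter,

$$ \left(\frac{\dot a}{a}\right)^{2} = H_0^{2}\left(\Omega_r a^{-4} + \Omega_m a^{-3} + \Omega_\Lambda\right), \qquad \Omega_\Lambda \equiv \frac{\Lambda c^{2}}{3H_0^{2}} \approx 0.7, $$

where $a$ is the cosmic scale factor and $H_0$ is the Hubble constant. Fitting the model’s six parameters to the CMB and other data yields the precision values quoted today.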
Ushering in the exoplanet era
To appreciate the contribution of Mayor and Queloz, consider that in 1995 the least massive known object outside the solar system was a star of 0.08 solar masses; Jupiter, for comparison, is about 0.001 M☉. Mayor was part of a team that in 1989 reported the probable detection of an object 11 times as massive as Jupiter that could be classified as either a very large planet or a brown dwarf. Pennsylvania State University astronomer Jason Wright says that other teams amassed preliminary evidence of extrasolar planets, but it was unconvincing and led planetary scientist William Cochran to declare, “Thou shalt not embarrass thyself and thy colleagues by claiming false planets.”
In 1992 Alexander Wolszczan and his colleagues discovered two planets orbiting the pulsar PSR B1257+12 via timing variations in the dead star’s radio beacon. (A third planet, found later around the same pulsar, remains the lowest-mass exoplanet yet discovered.) The discovery showed that exoplanets are out there, but the question remained of how common they are around stars like the Sun, where well-placed ones would presumably have the potential to support life.
At the Haute-Provence Observatory in southeastern France, Mayor and his graduate student Queloz conducted a survey of 142 stars using a spectrograph called ELODIE, which they designed to enable the observation of fainter stars than had previously been surveyed. The researchers’ approach, first proposed in 1952 by Otto Struve, was to detect the Doppler shift in the stellar spectrum due to the star’s motion as it is pushed and pulled by an orbiting planet. The expected stellar wobble due to a planet’s tug was on the order of 10 m/s; even now, the best spectrometers have a resolution of about 1000 m/s, Hogg says. Mayor and Queloz needed to be able to pinpoint a shift that accounted for a hundredth, or even a thousandth, of a pixel.
That’s exactly what they did through analysis of the signal from 51 Pegasi, a star located about 50 light-years away in the constellation Pegasus. The Doppler shift was consistent with the motion of a Jupiter-mass planet in a four-day orbit at 0.05 astronomical units, far shorter than the distance between Mercury and the Sun. The discovery of a “hot Jupiter” was surprising but also helpful, as the short period enabled Mayor and Queloz, and competing groups, to easily conduct follow-up observations. The astronomers announced their discovery at a conference in Italy almost exactly 24 years ago, on 6 October 1995, and soon published their result in Nature. Another group promptly confirmed the finding.
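To put numbers on the wobble, here is a minimal Python sketch of the standard radial-velocity semi-amplitude formula. The masses and periods are round illustrative values, not the published fit for 51 Pegasi b, whose minimum mass is actually about half of Jupiter’s:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
M_JUP = 1.898e27   # Jupiter mass, kg

def rv_semi_amplitude(m_planet, m_star, period_s, sin_i=1.0, ecc=0.0):
    """Stellar radial-velocity semi-amplitude K in m/s (standard formula)."""
    return ((2 * math.pi * G / period_s) ** (1 / 3)
            * m_planet * sin_i
            / (m_star + m_planet) ** (2 / 3)
            / math.sqrt(1 - ecc ** 2))

# A Jupiter analogue at 5.2 au (P ~ 11.86 yr): the ~10 m/s wobble quoted above.
print(rv_semi_amplitude(M_JUP, M_SUN, 11.86 * 365.25 * 86400))  # ~12.5 m/s

# A Jupiter-mass planet in a 4.2-day orbit, like the one found at 51 Pegasi:
print(rv_semi_amplitude(M_JUP, M_SUN, 4.23 * 86400))            # ~126 m/s
```

Scaling the second figure by 51 Peg b’s minimum mass of roughly half a Jupiter brings it down to about the 56 m/s that Mayor and Queloz measured; either way, a hot Jupiter’s signal is roughly ten times a Jupiter analogue’s, which is part of why it was detectable at all with mid-1990s instruments.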
“It’s a discovery that has completely changed our view of who we are,” says Yale University astronomer Debra Fischer. “And it came at a time when we thought that maybe there weren’t many planets around other stars.”
However, the astronomy community wasn’t yet convinced by Mayor and Queloz’s claim. Many researchers didn’t think it was possible for such a massive planet to either form so close to its star or migrate inward without getting incinerated. Theorists proposed that the observed stellar wobbles might not be caused by an exoplanet at all, but rather by phenomena such as stellar brightness oscillations. But even the most skeptical came around in 1999, with the discoveries of the first multi-exoplanet system by Fischer and colleagues, and of HD 209458 b. That planet was detected via the drop in brightness it caused when it passed in front of its star.
The early planet confirmations convinced observatory directors to build and install spectrographs. They also ultimately helped coax NASA to greenlight the development of a space telescope proposal that had been languishing for decades, a mission called Kepler. That satellite, which was launched in 2009, and instruments such as the Transiting Exoplanet Survey Satellite have detected thousands of planets and planet candidates.
Nearly a quarter century after Mayor and Queloz’s discovery, exoplanet science is a powerhouse endeavor that engages a significant percentage of the astrophysics community. Researchers join the field to study not only the planets but also the stars they orbit, which in turn has led to new insights in stellar astrophysics. By pairing transit measurements, which determine planets’ radii, with radial velocity, which provides masses, researchers have determined that many of the galaxy’s planets don’t resemble those in our solar system. The lack of resemblance challenges theories of planet formation and extends the range of planetary types that theories have to accommodate.
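As a sketch of how the pairing works, with illustrative Jupiter-like numbers rather than any particular system’s measurements:

```python
import math

R_SUN = 6.957e8    # solar radius, m
M_JUP = 1.898e27   # Jupiter mass, kg

# Transit: the fractional dip in starlight gives the planet's radius,
# since depth = (R_planet / R_star)^2 for a Sun-like host.
depth = 0.0105                        # ~1% dip, typical of a Jupiter-size planet
r_planet = R_SUN * math.sqrt(depth)   # ~7.1e7 m, about Jupiter's radius

# Radial velocity: the stellar wobble gives the planet's mass (m sin i).
m_planet = M_JUP

# Together they give a bulk density, which constrains composition.
density = m_planet / (4 / 3 * math.pi * r_planet ** 3)
print(f"radius ~ {r_planet / 1e3:.0f} km, density ~ {density:.0f} kg/m^3")
# ~1250 kg/m^3: gas-giant territory, close to Jupiter's 1326 kg/m^3
```

A density near water’s flags a gas giant; several times higher suggests rock and iron, which is how the unexpected "super-Earth versus mini-Neptune" populations were told apart.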
The most tantalizing goal of the field set in motion by Mayor and Queloz is to find planets that resemble Earth and to detect biosignatures. Researchers are already probing the atmospheres of individual worlds using the Hubble Space Telescope and other tools. Next-generation instruments, particularly the James Webb Space Telescope and the Wide Field Infrared Survey Telescope, will aid in that effort.
It was great to have been able to attend a lecture at the new home of the Institute of Physics. I have been a member for almost two decades and have even served as an officer for one of its interest groups, the Computational Physics Group, if you must know.
The event was a talk by Stephen Hilton from the UCL School of Pharmacy: “3D Printing and its Application in Chemistry and Pharmacy”. It was a very useful talk, covering applications ranging from teaching and cost saving in chemistry labs to personalised medicine and chemistry itself.
As for the building, it was nice to finally see the end result, with a hint of brutalist architecture and some nice details, such as the electromagnetic-wave diagram in some of the windows and Orion on the ceiling!
Left to right: Arthur Ashkin, Gérard Mourou, and Donna Strickland. Credits: Bell Labs, Alexis Cheziere/CNRS Photothèque, and University of Waterloo
Arthur Ashkin, Gérard Mourou, and Donna Strickland are to be awarded the 2018 Nobel Prize in Physics “for groundbreaking inventions in the field of laser physics,” the Royal Swedish Academy of Sciences announced on Tuesday. Ashkin, formerly of Bell Labs in New Jersey, will receive half the prize of 9 million Swedish krona (roughly $1 million); Mourou, of École Polytechnique in France and the University of Michigan, and Strickland, at the University of Waterloo in Canada, will split the other half.
The Royal Swedish Academy is honoring Ashkin for his invention of optical tweezers to trap and manipulate particles and living cells. In the 1970s and 1980s, he discovered that the radiation pressure in laser beams could be used not only to push small objects but also to confine and manipulate them. Although the initial targets of manipulation were latex beads, Ashkin soon expanded the technique to atoms, viruses, DNA, and other biological specimens.
Mourou and Strickland together developed chirped pulse amplification (CPA), in which a laser pulse is stretched, amplified, and then compressed to increase its power. The ultrafast, high-intensity tabletop lasers that ensued have spurred advances in data storage, materials manufacturing, and the study of femtosecond- and even attosecond-duration phenomena. Citing its mission to recognize inventions that benefit humankind, the academy also highlighted how Mourou and Strickland’s work made possible production of surgical stents and the use of lasers to correct vision.
When she receives her medal in December, Strickland will become the third woman to receive the physics prize out of 209 laureates, and the first since Maria Goeppert Mayer in 1963.
Trapped by light
At 96 years old, Ashkin is the oldest person to receive a Nobel Prize. When he was half that age, in 1970, he was at Bell Labs studying the use of light’s radiation pressure to propel objects. The main challenge in observing the optical force was to avoid heating the sample with the light, which causes a thermal gradient force that is usually orders of magnitude larger than the force due to radiation pressure. Ashkin tackled that problem by shining laser light on a transparent, nonabsorbing system: micron-sized latex spheres immersed in water.
Ashkin’s seminal 1970 paper in Physical Review Letters reports the beads’ acceleration in the direction of the laser beam, as one would expect from the collective nudge of a bundle of photons. But it also describes a second, less intuitive force that is directed toward the axis of the pulse. The force emerges from the refraction of the light as it passes through the curved interface between the low refractive index of the water and the higher-index bead, and it’s strongest at the core of the beam where the light intensity is highest (see Physics Today, November 2010, page 13).
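For beads much smaller than the wavelength there is a compact textbook expression for that gradient force (Ashkin’s micron-sized spheres are better treated with ray optics, so take this Rayleigh-limit form as illustrative):

$$ \mathbf{F}_{\text{grad}} = \frac{2\pi n_{\text{med}}\, a^{3}}{c}\left(\frac{m^{2}-1}{m^{2}+2}\right)\nabla I(\mathbf{r}), $$

where $a$ is the bead radius and $m$ is the ratio of the bead’s refractive index to the medium’s. For $m > 1$ the force points up the intensity gradient, toward the beam axis.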
Ashkin continued working on such optical manipulation, and in 1986 he set up an experiment with Bell Labs colleague Steven Chu using lenses to focus light on the beads. Ashkin and Chu expected the beads to move toward the high-intensity center of the beam and jet forward. Instead, the spheres stopped in their tracks. The momentum transfer from the scattered light leaving the sphere had imparted a backward force to counteract the beam’s forward push. “That’s what Nobel Prizes are made of,” says New York University optical physicist David Grier. “An obvious truth hiding in plain sight.”
Building on Ashkin’s work, Chu and others directed their research toward trapping and cooling atoms with lasers. Along with William Phillips and Claude Cohen-Tannoudji, Chu received the 1997 Nobel Prize in Physics for that work (see Physics Today, December 1997, page 17); some people in the field, Ashkin included, felt he should have been recognized by the Nobel committee.
In this optical trapping and imaging apparatus, a laser travels up through an objective, reaching the sample from below. The sample is imaged using light from above, which travels down through the objective and is then relayed to a camera. Credit: David Grier
Unlike some of his colleagues, Ashkin was attracted to the biological potential of his optical tweezers. In the late 1980s, he demonstrated the optical trapping of viruses and living cells, such as bacteria, using a lower-energy IR laser to avoid searing the specimens. Since then, researchers have used optical tweezers to stretch strands of DNA, prod red blood cells, and tie molecular knots. In a widely cited 1993 paper in Nature, researchers optically trapped the protein kinesin, which transports molecular cargo inside eukaryotic cells, and measured the force it applied to a bead to which it was affixed.
And the research continues. Grier and his colleagues recently introduced holographic optical trapping, in which a computer shapes a single laser beam into tens or hundreds of optical traps, each capable of manipulating objects in 3D. They’ve also developed a laser setup—a rudimentary tractor beam—that pulls objects in the opposite direction of beam propagation.
Ultrafast, ultrapowerful pulses
Whereas Ashkin exploited the properties of lasers to manipulate objects, Mourou and Strickland worked on manipulating the laser pulses themselves. Within five years of the laser’s development in 1960, researchers had found multiple ways to shorten the duration of pulses, and thus boost their power, by six orders of magnitude. But the resulting high intensities (power per unit area) were becoming impractical to work with: amplifiers and other optical components suffered damage, and pulses propagated erratically due to extreme intensity gradients in the beam. As a result, laser intensity and power barely improved between the mid 1960s and the mid 1980s.
Inspired by a technique developed for microwaves, Mourou, then at the University of Rochester in New York, and his graduate student Strickland set out to amplify a stretched-out—and thus less intense—pulse and then recompress it. In their pivotal experiment, Mourou and Strickland sent a nanojoule pulse through an optical fiber. Due to positive group velocity dispersion within the fiber, the red component of the light propagated faster than the blue. The stretched, lower-energy-density pulse was then amplified and passed through a pair of parallel diffraction gratings, which allowed the blue component to catch up to the red. The reassembled 2 ps pulse had three orders of magnitude more power than the original pulse (see the article by Mourou, Christopher Barty, and Michael Perry, Physics Today, January 1998, page 22). The 1985 paper on the CPA technique, published in Optics Communications, was Strickland’s first publication. According to Google Scholar, it has been cited 4677 times.
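The stretch-amplify-compress logic is easy to mimic numerically. Below is a minimal Python/NumPy sketch with purely illustrative dispersion and gain values, not the 1985 experiment’s parameters: a quadratic spectral phase stretches the pulse, lowering its peak intensity during amplification, and the opposite phase recompresses it.

```python
import numpy as np

n = 2 ** 14
t = np.linspace(-50e-12, 50e-12, n)        # 100 ps time window, s
dt = t[1] - t[0]
tau = 100e-15                               # 100 fs input pulse width
field = np.exp(-t ** 2 / (2 * tau ** 2))    # Gaussian field envelope

omega = 2 * np.pi * np.fft.fftfreq(n, dt)   # angular frequency grid, rad/s

def apply_gdd(e, phi2):
    """Apply quadratic spectral phase phi2 (group-delay dispersion, s^2)."""
    return np.fft.ifft(np.fft.fft(e) * np.exp(-0.5j * phi2 * omega ** 2))

gdd = 1e-24                                  # illustrative dispersion, s^2
stretched = apply_gdd(field, gdd)            # fiber: red runs ahead of blue
amplified = np.sqrt(1e3) * stretched         # 1000x power gain at low intensity
recompressed = apply_gdd(amplified, -gdd)    # grating pair undoes the chirp

peak = lambda e: np.abs(e).max() ** 2
print(f"peak intensity drop while amplifying: {peak(field) / peak(stretched):.0f}x")
print(f"output / input peak power: {peak(recompressed) / peak(field):.0f}x")
```

With these numbers the pulse stretches about a hundredfold, so the amplifier sees a hundredth of the peak intensity, yet the output pulse emerges a thousand times more powerful than the input.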
The original CPA technique had some flaws—for one, the shape of the reassembled pulse didn’t perfectly match that of the original. Once again, Mourou and Strickland drew inspiration from others’ work on longer-wavelength light. Along with colleagues at Rochester, they built a pulse compressor based on a design for the telecommunications industry. Everything came together when the Rochester researchers and a visiting scientist named Patrick Maine produced a 1 TW pulse—1 J embedded in 1 ps—on a tabletop. The “Maine event,” as the researchers called it, triggered a radical change in the use of short-pulse lasers. With no fear of frying their apparatus, researchers could ditch liquid-dye amplifiers in favor of titanium:sapphire and other solid-state media with superior performance.
Most importantly for research scientists, the CPA technique spurred the proliferation of the tabletop terawatt, or T3, laser. Laser technology that was once viable only for major research institutions could soon be done in university physics labs for hundreds of thousands, rather than many millions, of dollars. Scientists use such lasers, which can now pack petawatts of power, to probe atom ionization, electron–nuclear coupling, and other subfemtosecond processes, among other research. (See Physics Today, January 2018, page 18, and June 2018, page 20.) “I always felt that ultrafast lasers have not received the attention they deserved,” says Ursula Keller, who directs ultrafast-laser-physics research at ETH Zürich. “When you look at the impact, it’s enormous.”
Although Bose-Einstein condensation has been observed in several systems, the limits of the phenomenon need to be pushed further: to faster timescales, higher temperatures, and smaller sizes. The easier these condensates become to create, the more exciting routes open up for new technological applications. New light sources, for example, could be made extremely small and enable fast information processing.
In experiments by Aalto researchers, the condensed particles were mixtures of light and electrons in motion in gold nanorods arranged into a periodic array. Unlike most previous Bose-Einstein condensates created experimentally, the new condensate does not need to be cooled down to temperatures near absolute zero. Because the particles are mostly light, the condensation could be induced at room temperature.
‘The gold nanoparticle array is easy to create with modern nanofabrication methods. Near the nanorods, light can be focused into tiny volumes, even below the wavelength of light in vacuum. These features offer interesting prospects for fundamental studies and applications of the new condensate,’ says Academy Professor Päivi Törmä.
The main hurdle in acquiring proof of the new kind of condensate is that it comes into being extremely quickly. ‘According to our theoretical calculations, the condensate forms in only a picosecond,’ says doctoral student Antti Moilanen. ‘How could we ever verify the existence of something that only lasts one trillionth of a second?’
Turning distance into time
A key idea was to initiate the condensation process with a kick so that the particles forming the condensate would start to move.
‘As the condensate takes form, it will emit light throughout the gold nanorod array. By observing the light, we can monitor how the condensation proceeds in time. This is how we can turn distance into time,’ explains staff scientist Tommi Hakala.
The light that the condensate emits is similar to laser light. ‘We can alter the distance between each nanorod to control whether Bose-Einstein condensation or the formation of ordinary laser light occurs. The two are closely related phenomena, and being able to distinguish between them is crucial for fundamental research. They also promise different kinds of technological applications,’ explains Professor Törmä.
Both lasing and Bose-Einstein condensation provide bright beams, but the coherences of the light they offer have different properties. These, in turn, affect the ways the light can be tuned to meet the requirements of a specific application. The new condensate can produce light pulses that are extremely short and may offer faster speeds for information processing and imaging applications. Academy Professor Törmä has already obtained a Proof of Concept grant from the European Research Council to explore such prospects.
1 Tommi K. Hakala, Antti J. Moilanen, Aaro I. Väkeväinen, Rui Guo, Jani-Petri Martikainen, Konstantinos S. Daskalakis, Heikki T. Rekola, Aleksi Julku, Päivi Törmä. Bose–Einstein condensation in a plasmonic lattice. Nature Physics, 2018; DOI: 10.1038/s41567-018-0109-9
New quantum method generates really random numbers
Researchers at the National Institute of Standards and Technology (NIST) have developed a method for generating numbers guaranteed to be random by quantum mechanics. Described in the April 12 issue of Nature, the experimental technique surpasses all previous methods for ensuring the unpredictability of its random numbers and may enhance security and trust in cryptographic systems.
The new NIST method generates digital bits (1s and 0s) with photons, or particles of light, using data generated in an improved version of a landmark 2015 NIST physics experiment. That experiment showed conclusively that what Einstein derided as “spooky action at a distance” is real. In the new work, researchers process the spooky output to certify and quantify the randomness available in the data and generate a string of much more random bits.
Random numbers are used hundreds of billions of times a day to encrypt data in electronic networks. But these numbers are not certifiably random in an absolute sense. That’s because they are generated by software formulas or physical devices whose supposedly random output could be undermined by factors such as predictable sources of noise. Running statistical tests can help, but no statistical test on the output alone can absolutely guarantee that the output was unpredictable, especially if an adversary has tampered with the device.
“It’s hard to guarantee that a given classical source is really unpredictable,” NIST mathematician Peter Bierhorst said. “Our quantum source and protocol is like a fail-safe. We’re sure that no one can predict our numbers.”
“Something like a coin flip may seem random, but its outcome could be predicted if one could see the exact path of the coin as it tumbles. Quantum randomness, on the other hand, is real randomness. We’re very sure we’re seeing quantum randomness because only a quantum system could produce these statistical correlations between our measurement choices and outcomes.”
The new quantum-based method is part of an ongoing effort to enhance NIST’s public randomness beacon, which broadcasts random bits for applications such as secure multiparty computation. The NIST beacon currently relies on commercial sources.
Quantum mechanics provides a superior source of randomness because measurements of some quantum particles (those in a “superposition” of both 0 and 1 at the same time) have fundamentally unpredictable results. Researchers can easily measure a quantum system. But it’s hard to prove that measurements are being made of a quantum system and not a classical system in disguise.
In NIST’s experiment, that proof comes from observing the spooky quantum correlations between pairs of distant photons while closing the “loopholes” that might otherwise allow non-random bits to appear to be random. For example, the two measurement stations are positioned too far apart to allow hidden communications between them; by the laws of physics any such exchanges would be limited to the speed of light.
Random numbers are generated in two steps. First, the spooky action experiment generates a long string of bits through a “Bell test,” in which researchers measure correlations between the properties of the pairs of photons. The timing of the measurements ensures that the correlations cannot be explained by classical processes such as pre-existing conditions or exchanges of information at, or slower than, the speed of light. Statistical tests of the correlations demonstrate that quantum mechanics is at work, and these data allow the researchers to quantify the amount of randomness present in the long string of bits.
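As a flavor of what such a statistical test looks like, here is a toy Python calculation of the CHSH combination of correlations for an ideal entangled photon pair; these are idealized quantum predictions, not the experiment’s data:

```python
import math

# Ideal quantum correlation for polarization-entangled photons:
# E(a, b) = -cos(2(a - b)) for analyzer angles a and b.
def E(a, b):
    return -math.cos(2 * (a - b))

a1, a2 = 0.0, math.pi / 4              # Alice's two measurement settings
b1, b2 = math.pi / 8, 3 * math.pi / 8  # Bob's two measurement settings

S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))
print(f"S = {S:.3f}")  # 2.828; any classical (local) model obeys S <= 2
```

Measuring S above 2 is what rules out classical explanations, and the margin by which the bound is violated feeds the calculation of how much genuine randomness the bits contain.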
That randomness may be spread very thin throughout the long string of bits. For example, nearly every bit might be 0 with only a few being 1. To obtain a short, uniform string with concentrated randomness such that each bit has a 50/50 chance of being 0 or 1, a second step called “extraction” is performed. NIST researchers developed software to process the Bell test data into a shorter string of bits that are nearly uniform; that is, with 0s and 1s equally likely. The full process requires the input of two independent strings of random bits to select measurement settings for the Bell tests and to “seed” the software to help extract the randomness from the original data. NIST researchers used a conventional random number generator to generate these input strings.
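To illustrate what a seeded extractor does, here is a toy Toeplitz-hashing extractor in Python. This is a standard textbook construction, not the specific extractor NIST used, and the bias of the simulated raw bits is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

n_raw, n_out = 4096, 256
# Simulated raw output: long but heavily biased, so randomness is "thin".
raw = (rng.random(n_raw) < 0.05).astype(int)   # ~5% ones

# An independent uniform seed defines a random Toeplitz matrix.
seed = rng.integers(0, 2, n_raw + n_out - 1)

# T[i, j] = seed[(i - j) + n_raw - 1] is constant along each diagonal.
i = np.arange(n_out)[:, None]
j = np.arange(n_raw)[None, :]
T = seed[i - j + n_raw - 1]

# Hash the raw bits down to a short string: matrix-vector product over GF(2).
out = (T @ raw) % 2

print(f"raw fraction of ones: {raw.mean():.3f}")        # ~0.05, biased
print(f"extracted fraction of ones: {out.mean():.3f}")  # ~0.5, near uniform
```

The output is much shorter than the input, but each of its bits is close to a fair coin flip, which is exactly the trade the NIST protocol makes on its Bell-test data.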
From 55,110,210 trials of the Bell test, each of which produces two bits, researchers extracted 1,024 bits certified to be uniform to within one trillionth of 1 percent.
“A perfect coin toss would be uniform, and we made 1,024 bits almost perfectly uniform, each extremely close to equally likely to be 0 or 1,” Bierhorst said.
Other researchers have previously used Bell tests to generate random numbers, but the NIST method is the first to use a loophole-free Bell test and to process the resulting data through extraction. Extractors and seeds are already used in classical random number generators; in fact, random seeds are essential in computer security and can be used as encryption keys.
In the new NIST method, the final numbers are certified to be random even if the measurement settings and seed are publicly known; the only requirement is that the Bell test experiment be physically isolated from customers and hackers. “The idea is you get something better out (private randomness) than what you put in (public randomness),” Bierhorst said.
Peter Bierhorst, Emanuel Knill, Scott Glancy, Yanbao Zhang, Alan Mink, Stephen Jordan, Andrea Rommal, Yi-Kai Liu, Bradley Christensen, Sae Woo Nam, Martin J. Stevens, Lynden K. Shalm. Experimentally Generated Randomness Certified by the Impossibility of Superluminal Signals. Nature, 2018; DOI: 10.1038/s41586-018-0019-0
A laser in Shanghai, China, has set power records yet fits on tabletops.
Inside a cramped laboratory in Shanghai, China, physicist Ruxin Li and colleagues are breaking records with the most powerful pulses of light the world has ever seen. At the heart of their laser, called the Shanghai Superintense Ultrafast Laser Facility (SULF), is a single cylinder of titanium-doped sapphire about the width of a Frisbee. After kindling light in the crystal and shunting it through a system of lenses and mirrors, the SULF distills it into pulses of mind-boggling power. In 2016, it achieved an unprecedented 5.3 million billion watts, or 5.3 petawatts (PW). The lights in Shanghai do not dim each time the laser fires, however. Although the pulses are extraordinarily powerful, they are also infinitesimally brief, lasting less than a trillionth of a second. The researchers are now upgrading their laser and hope to beat their own record by the end of this year with a 10-PW shot, which would pack more than 1000 times the power of all the world’s electrical grids combined.
The group’s ambitions don’t end there. This year, Li and colleagues intend to start building a 100-PW laser known as the Station of Extreme Light (SEL). By 2023, it could be flinging pulses into a chamber 20 meters underground, subjecting targets to extremes of temperature and pressure not normally found on Earth, a boon to astrophysicists and materials scientists alike. The laser could also power demonstrations of a new way to accelerate particles for use in medicine and high-energy physics. But most alluring, Li says, would be showing that light could tear electrons and their antimatter counterparts, positrons, from empty space—a phenomenon known as “breaking the vacuum.” It would be a striking illustration that matter and energy are interchangeable, as Albert Einstein’s famous E=mc2 equation states. Although nuclear weapons attest to the conversion of matter into immense amounts of heat and light, doing the reverse is not so easy. But Li says the SEL is up to the task. “That would be very exciting,” he says. “It would mean you could generate something from nothing.”
The Chinese group is “definitely leading the way” to 100 PW, says Philip Bucksbaum, an atomic physicist at Stanford University in Palo Alto, California. But there is plenty of competition. In the next few years, 10-PW devices should switch on in Romania and the Czech Republic as part of Europe’s Extreme Light Infrastructure, although the project recently put off its goal of building a 100-PW-scale device. Physicists in Russia have drawn up a design for a 180-PW laser known as the Exawatt Center for Extreme Light Studies (XCELS), while Japanese researchers have put forward proposals for a 30-PW device.
Largely missing from the fray are U.S. scientists, who have fallen behind in the race to high powers, according to a study published last month by a National Academies of Sciences, Engineering, and Medicine group that was chaired by Bucksbaum. The study calls on the Department of Energy to plan for at least one high-power laser facility, and that gives hope to researchers at the University of Rochester in New York, who are developing plans for a 75-PW laser, the Optical Parametric Amplifier Line (OPAL). It would take advantage of beamlines at OMEGA-EP, one of the country’s most powerful lasers. “The [Academies] report is encouraging,” says Jonathan Zuegel, who heads the OPAL.
Invented in 1960, lasers use an external “pump,” such as a flash lamp, to excite electrons within the atoms of a lasing material—usually a gas, crystal, or semiconductor. When one of these excited electrons falls back to its original state, it emits a photon, which in turn stimulates another electron to emit a photon, and so on. Unlike the spreading beams of a flashlight, the photons in a laser emerge in a tightly packed stream at specific wavelengths.
Because power equals energy divided by time, there are basically two ways to maximize it: Either boost the energy of your laser, or shorten the duration of its pulses. In the 1970s, researchers at Lawrence Livermore National Laboratory (LLNL) in California focused on the former, boosting laser energy by routing beams through additional lasing crystals made of glass doped with neodymium. Beams above a certain intensity, however, can damage the amplifiers. To avoid this, LLNL had to make the amplifiers ever larger, many tens of centimeters in diameter. But in 1983, Gerard Mourou, now at the École Polytechnique near Paris, and his colleagues made a breakthrough. He realized that a short laser pulse could be stretched in time—thereby making it less intense—by a diffraction grating that spreads the pulse into its component colors. After being safely amplified to higher energies, the light could be recompressed with a second grating. The end result: a more powerful pulse and an intact amplifier.