Random thoughts about random subjects… From science to literature and between manga and watercolours, by way of data science and rugby; including film, physics and fiction, programming, pictures and puns.
Bosons — especially photons — have a natural tendency to clump together. In 1987, three physicists conducted a remarkable experiment demonstrating this clustering property, known as the Hong-Ou-Mandel effect. Recently, researchers at ULB’s Centre for Quantum Information and Communication have identified another way in which photons manifest their propensity to stick together. This research has just been published in Proceedings of the National Academy of Sciences.
Since the very beginning of quantum physics, a hundred years ago, it has been known that all particles in the universe fall into two categories: fermions and bosons. For instance, the protons found in atomic nuclei are fermions, while bosons include photons — which are particles of light — as well as the Brout-Englert-Higgs boson, for which François Englert, a professor at ULB, was awarded a Nobel Prize in Physics in 2013.
Bosons — especially photons — have a natural tendency to clump together. One of the most remarkable experiments demonstrating photons’ tendency to coalesce was conducted in 1987, when three physicists identified an effect that has since been named after them: the Hong-Ou-Mandel effect. If two photons are sent simultaneously, each towards a different side of a beam splitter (a sort of semi-transparent mirror), one could expect each photon to be either reflected or transmitted.
Logically, photons should sometimes be detected on opposite sides of this mirror, which would happen if both are reflected or if both are transmitted. However, the experiment has shown that this never actually happens: the two photons always end up on the same side of the mirror, as though they ‘preferred’ sticking together! In an article published recently in the US journal Proceedings of the National Academy of Sciences, Nicolas Cerf — a professor at the Centre for Quantum Information and Communication (École polytechnique de Bruxelles) — and his former PhD student Michael Jabbour — now a postdoctoral researcher at the University of Cambridge — describe how they identified another way in which photons manifest their tendency to stay together. Instead of a semi-transparent mirror, the researchers used an optical amplifier, called an active component because it produces new photons. They were able to demonstrate the existence of an effect similar to the Hong-Ou-Mandel effect, but which in this case captures a new form of quantum interference.
Quantum physics tells us that the Hong-Ou-Mandel effect is a consequence of interference, coupled with the fact that the two photons are absolutely identical. It is impossible to distinguish the trajectory in which both photons were reflected off the mirror from the trajectory in which both were transmitted through it, because the photons themselves cannot be told apart. The remarkable consequence is that the two trajectories cancel each other out! As a result, the two photons are never observed on opposite sides of the mirror. This property of photons is quite elusive: if they were tiny balls, identical in every way, both trajectories could very well be observed. As is often the case, quantum physics is at odds with our classical intuition.
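The cancellation can be checked with a few lines of arithmetic. The sketch below is an illustration, not the experiment's own analysis; it uses one common beam-splitter convention (a factor of i on reflection) and expands the two-photon state after a 50:50 beam splitter, showing that the one-photon-per-side outcome has zero probability while each bunched outcome occurs half the time:

```python
import math

# 50:50 beam splitter: input mode 1 -> (out_a + i*out_b)/sqrt(2)
#                      input mode 2 -> (i*out_a + out_b)/sqrt(2)
s = 1 / math.sqrt(2)
mode1 = {(1, 0): s, (0, 1): 1j * s}      # creation-operator coefficients
mode2 = {(1, 0): 1j * s, (0, 1): s}

# one photon in each input: multiply the two creation-operator polynomials
state = {}
for k1, c1 in mode1.items():
    for k2, c2 in mode2.items():
        key = (k1[0] + k2[0], k1[1] + k2[1])
        state[key] = state.get(key, 0) + c1 * c2

# Fock-state normalisation: (a_dagger)^n |0> = sqrt(n!) |n>
probs = {k: abs(c) ** 2 * math.factorial(k[0]) * math.factorial(k[1])
         for k, c in state.items()}
# probs[(1, 1)] is the "one photon on each side" outcome: it vanishes,
# while probs[(2, 0)] and probs[(0, 2)] are each 1/2
```

The "both reflected" and "both transmitted" amplitudes enter with opposite signs and cancel exactly, which is the Hong-Ou-Mandel effect in miniature.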
The two researchers from ULB and the University of Cambridge have demonstrated that the impossibility to differentiate the photons emitted by an optical amplifier produces an effect that may be even more surprising. Fundamentally, the interference that occurs on a semi-transparent mirror stems from the fact that if we imagine switching the two photons on either side of the mirror, the resulting configuration is exactly identical. With an optical amplifier, on the other hand, the effect identified by Cerf and Jabbour must be understood by looking at photon exchanges not through space, but through time.
When two photons are sent into an optical amplifier, they can simply pass through unaffected. However, an optical amplifier can also produce (or destroy) a pair of twin photons: so another possibility is that both photons are eliminated and a new pair is created. In principle, it should be possible to tell which scenario has occurred based on whether the two photons exiting the optical amplifier are identical to those that were sent in. If it were possible to tell the pairs of photons apart, then the trajectories would be different and there would be no quantum effect. However, the researchers have found that the fundamental impossibility of telling photons apart in time (in other words, it is impossible to know whether they have been replaced inside the optical amplifier) completely eliminates the very possibility of observing a pair of photons exiting the amplifier. This means the researchers have indeed identified a quantum interference phenomenon that occurs through time. Hopefully, an experiment will eventually confirm this fascinating prediction.
Two-boson quantum interference in time. Proceedings of the National Academy of Sciences, 2020; 202010827 DOI: 10.1073/pnas.2010827117
This is a reblog of an article in ScienceDaily. See the original here.
A research team has for the first time experimentally proved a century-old quantum theory that relativistic particles can pass through a barrier with 100% transmission.
The perfect transmission of sound through a barrier is difficult to achieve, if not impossible based on our existing knowledge. This is also true with other energy forms such as light and heat.
A research team led by Professor Xiang Zhang, President of the University of Hong Kong (HKU), when he was a professor at the University of California, Berkeley (UC Berkeley), has for the first time experimentally proved a century-old quantum theory that relativistic particles can pass through a barrier with 100% transmission. The research findings have been published in the top academic journal Science.
In the everyday world, it would be difficult for us to jump over a thick, high wall without enough accumulated energy. In contrast, a microscopic particle in the quantum world is predicted to pass through a barrier well beyond its energy, regardless of the barrier’s height or width, as if it were “transparent.”
As early as 1929, theoretical physicist Oscar Klein proposed that a relativistic particle can penetrate a potential barrier with 100% transmission upon normal incidence on the barrier. Scientists called this exotic and counterintuitive phenomenon “Klein tunneling.” In the 90-odd years that followed, scientists tried various approaches to test Klein tunneling experimentally, but the attempts were unsuccessful and direct experimental evidence was still lacking.
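The contrast with ordinary quantum tunneling can be made concrete. The sketch below is a textbook illustration, not the paper's acoustic model: it evaluates the exact Schrödinger transmission of a massive particle through a square barrier, which decays rapidly with barrier width, whereas Klein's prediction for a massless Dirac particle at normal incidence is T = 1 regardless of the barrier. The 0.5 eV / 1 eV / nanometre numbers are arbitrary example values:

```python
import math

HBAR = 1.0545718e-34   # Planck constant / 2*pi, J*s
M_E = 9.109e-31        # electron mass, kg
EV = 1.602e-19         # one electron-volt in joules

def schrodinger_transmission(E_eV, V_eV, d_nm):
    """Exact transmission of a massive (Schrodinger) particle of energy E
    through a square barrier of height V > E and width d."""
    E, V, d = E_eV * EV, V_eV * EV, d_nm * 1e-9
    kappa = math.sqrt(2 * M_E * (V - E)) / HBAR   # decay constant inside the barrier
    return 1.0 / (1.0 + V**2 * math.sinh(kappa * d)**2 / (4 * E * (V - E)))

# A massless Dirac particle at normal incidence is predicted to pass with
# T = 1 (Klein tunneling), no matter how high or wide the barrier is.
t_1nm = schrodinger_transmission(0.5, 1.0, 1.0)   # well below 1
t_2nm = schrodinger_transmission(0.5, 1.0, 2.0)   # smaller still
```

Doubling the barrier width crushes the ordinary tunneling probability by orders of magnitude, which is exactly why perfect transmission for relativistic particles is so striking.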
Professor Zhang’s team conducted the experiment in artificially designed phononic crystals with a triangular lattice. The lattice’s linear dispersion properties make it possible to mimic the relativistic Dirac quasiparticle by sound excitation, which led to the successful experimental observation of Klein tunneling.
“This is an exciting discovery. Quantum physicists have always tried to observe Klein tunneling in elementary particle experiments, but it is a very difficult task. We designed a phononic crystal similar to graphene that can excite relativistic quasiparticles, but unlike natural graphene, the geometry of the human-made phononic crystal can be adjusted freely to precisely achieve the ideal conditions that enabled the first direct observation of Klein tunneling,” said Professor Zhang.
The achievement not only represents a breakthrough in fundamental physics, but also presents a new platform for exploring emerging macroscale systems to be used in applications such as on-chip logic devices for sound manipulation, acoustic signal processing, and sound energy harvesting.
“In current acoustic communications, the transmission loss of acoustic energy on the interface is unavoidable. If the transmittance on the interface can be increased to nearly 100%, the efficiency of acoustic communications can be greatly improved, thus opening up cutting-edge applications. This is especially important when a surface or interface hinders accurate acoustic detection, such as in underwater exploration. The experimental measurement is also conducive to the future study of quasiparticles with topological properties in phononic crystals, which might be difficult to perform in other systems,” said Dr. Xue Jiang, a former member of Zhang’s team and currently an Associate Researcher at the Department of Electronic Engineering at Fudan University.
Dr. Jiang pointed out that the research findings might also benefit biomedical devices. They may help improve the accuracy of ultrasound penetration through obstacles to reach designated targets such as tissues or organs, which could improve ultrasound precision for better diagnosis and treatment.
On the basis of the current experiments, researchers can control the mass and dispersion of the quasiparticle by exciting the phononic crystals at different frequencies, thus achieving flexible experimental configuration and on/off control of Klein tunneling. This approach can be extended to other artificial structures for the study of optics and thermotics. It allows unprecedented control of quasiparticles and wavefronts, and contributes to the exploration of other complex quantum physical phenomena.
This is a reblog of a story in ScienceDaily. See the original here.
Underwhelming results underscore the complexity of language evolution while showing promise in some current applications
Researchers have investigated the ability of machine learning algorithms to identify lexical borrowings using word lists from a single language. Results show that current machine learning methods alone are insufficient for borrowing detection, confirming that additional data and expert knowledge are needed to tackle one of historical linguistics’ most pressing challenges.
Lexical borrowing, or the direct transfer of words from one language to another, has interested scholars for millennia, as evidenced already in Plato’s Kratylos dialogue, in which Socrates discusses the challenge imposed by borrowed words on etymological studies. In historical linguistics, lexical borrowings help researchers trace the evolution of modern languages and indicate cultural contact between distinct linguistic groups — whether recent or ancient. However, the techniques for identifying borrowed words have resisted formalization, demanding that researchers rely on a variety of proxy information and the comparison of multiple languages.
“The automated detection of lexical borrowings is still one of the most difficult tasks we face in computational historical linguistics,” says Johann-Mattis List, who led the study.
In the current study, researchers from PUCP and MPI-SHH employed different machine learning techniques to train language models that mimic the way in which linguists identify borrowings when considering only the evidence provided by a single language: if sounds, or the ways in which sounds combine to form words, are atypical compared with other words in the same language, this often hints at recent borrowings. The models were then applied to a modified version of the World Loanword Database, a catalog of borrowing information for a sample of 40 languages from different language families all over the world, in order to see how accurately words within a given language would be classified as borrowed or not by the different techniques.
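The intuition described above, that words with atypical sound combinations are likely borrowings, can be sketched in a few lines. This is only an illustration of the general idea, not the authors' actual models; the "native" word list is invented for the example, and real studies work with phonetic transcriptions rather than spellings:

```python
from collections import Counter
import math

# invented toy word list standing in for a real monolingual wordlist
native_words = ["casa", "cosa", "mesa", "masa", "paso", "peso", "rosa",
                "ropa", "pala", "pelo", "malo", "mano", "lobo", "luna",
                "cama", "copa", "nota", "bola", "tela", "vino"]

# train a character-bigram model with word-boundary markers
counts, totals = Counter(), Counter()
for w in native_words:
    padded = "^" + w + "$"
    for a, b in zip(padded, padded[1:]):
        counts[(a, b)] += 1
        totals[a] += 1

def surprisal(word, alpha=1.0, vocab=30):
    """Average per-bigram surprisal under a Laplace-smoothed bigram model;
    high values flag phonotactically atypical (possibly borrowed) words."""
    padded = "^" + word + "$"
    total = 0.0
    for a, b in zip(padded, padded[1:]):
        p = (counts[(a, b)] + alpha) / (totals[a] + alpha * vocab)
        total += -math.log(p)
    return total / (len(padded) - 1)
```

Under this toy model a word like "szczyt" scores far higher surprisal than "casa"; the study's lexical language models are considerably richer, but rest on the same monolingual signal.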
In many cases the results were unsatisfying, suggesting that loanword detection is too difficult for the most commonly used machine learning methods. However, in specific situations, such as in lists with a high proportion of loanwords, or in languages whose loanwords come primarily from a single donor language, the teams’ lexical language models showed some promise.
“After these first experiments with monolingual lexical borrowings, we can proceed to stake out other aspects of the problem, moving into multilingual and cross-linguistic approaches,” says John Miller of PUCP, the study’s co-lead author.
“Our computer-assisted approach, along with the dataset we are releasing, will shed new light on the importance of computer-assisted methods for language comparison and historical linguistics,” adds Tiago Tresoldi, the study’s other co-lead author from MPI-SHH.
The study joins ongoing efforts to tackle one of the most challenging problems in historical linguistics, showing that loanword detection cannot rely on mono-lingual information alone. In the future, the authors hope to develop better-integrated approaches that take multi-lingual information into account.
Using lexical language models to detect borrowings in monolingual wordlists. PLOS ONE, 2020; 15 (12): e0242709 DOI: 10.1371/journal.pone.0242709
Researchers at the University of Tsukuba have created a new carbon-based electrical device, π-ion gel transistors (PIGTs), by using an ionic gel made of a conductive polymer. This work may lead to cheaper and more reliable flexible printable electronics.
Organic conductors, which are carbon-based polymers that can carry electrical currents, have the potential to radically change the way electronic devices are manufactured. These conductors have properties that can be tuned via chemical modification and may be easily printed as circuits. Compared with current silicon solar panels and transistors, systems based on organic conductors could be flexible and easier to install. However, their electrical conductivity can be drastically reduced if the conjugated polymer chains become disordered because of incorrect processing, which greatly limits their ability to compete with existing technologies.
Now, a team of researchers led by the University of Tsukuba have formulated a novel method for preserving the electrical properties of organic conductors by forming an “ion gel.” In this case, the solvent around the poly(para-phenyleneethynylene) (PPE) chains was replaced with an ionic liquid, which then turned into a gel. Using confocal fluorescent microscopy and scanning electron microscopy, the researchers were able to verify the morphology of the organic conductor.
“We showed that the internal structure of our π-ion gel is a nanofiber network of PPE, which is very good at reliably conducting electricity,” says author Professor Yohei Yamamoto.
In addition to acting as wires for delocalized electrons, the polymer chains direct the flow of mobile ions, which can help move charge-carriers to the carbon rings. This allows current to flow through the entire volume of the device. The resulting transistor can switch on and off in response to voltage changes in less than 20 microseconds — which is faster than any previous device of this type.
“We plan to use this advance in supramolecular chemistry and organic electronics to design a whole range of flexible electronic devices,” explains Professor Yamamoto. The fast response time and high conductivity open the way for flexible sensors that enjoy the ease of fabrication associated with organic conductors, without sacrificing speed or performance.
This is a reblog of an article in ScienceDaily. See the original here.
A newly-designed atomic clock uses entangled atoms to keep time even more precisely than its state-of-the-art counterparts. The design could help scientists detect dark matter and study gravity’s effect on time.
Atomic clocks are the most precise timekeepers in the world. These exquisite instruments use lasers to measure the vibrations of atoms, which oscillate at a constant frequency, like many microscopic pendulums swinging in sync. The best atomic clocks in the world keep time with such precision that, if they had been running since the beginning of the universe, they would only be off by about half a second today.
Still, they could be even more precise. If atomic clocks could more accurately measure atomic vibrations, they would be sensitive enough to detect phenomena such as dark matter and gravitational waves. With better atomic clocks, scientists could also start to answer some mind-bending questions, such as what effect gravity might have on the passage of time and whether time itself changes as the universe ages.
Now a new kind of atomic clock designed by MIT physicists may enable scientists to explore such questions and possibly reveal new physics.
The researchers report in the journal Nature that they have built an atomic clock that measures not a cloud of randomly oscillating atoms, as state-of-the-art designs measure now, but instead atoms that have been quantumly entangled. The atoms are correlated in a way that is impossible according to the laws of classical physics, and that allows the scientists to measure the atoms’ vibrations more accurately.
The new setup can achieve the same precision four times faster than clocks without entanglement.
“Entanglement-enhanced optical atomic clocks will have the potential to reach a better precision in one second than current state-of-the-art optical clocks,” says lead author Edwin Pedrozo-Peñafiel, a postdoc in MIT’s Research Laboratory of Electronics.
If state-of-the-art atomic clocks were adapted to measure entangled atoms the way the MIT team’s setup does, their timing would improve such that, over the entire age of the universe, the clocks would be less than 100 milliseconds off.
The paper’s other co-authors from MIT are Simone Colombo, Chi Shu, Albert Adiyatullin, Zeyang Li, Enrique Mendez, Boris Braverman, Akio Kawasaki, Saisuke Akamatsu, Yanhong Xiao, and Vladan Vuletic, the Lester Wolfe Professor of Physics.
Since humans began tracking the passage of time, they have done so using periodic phenomena, such as the motion of the sun across the sky. Today, vibrations in atoms are the most stable periodic events that scientists can observe. Furthermore, one cesium atom will oscillate at exactly the same frequency as another cesium atom.
To keep perfect time, clocks would ideally track the oscillations of a single atom. But at that scale, an atom is so small that it behaves according to the mysterious rules of quantum mechanics: when measured, it behaves like a flipped coin, giving the correct probabilities only when averaged over many flips. This limitation is what physicists refer to as the Standard Quantum Limit.
“When you increase the number of atoms, the average given by all these atoms goes toward something that gives the correct value,” says Colombo.
This is why today’s atomic clocks are designed to measure a gas composed of thousands of the same type of atom, in order to get an estimate of their average oscillations. A typical atomic clock does this by first using a system of lasers to corral a gas of ultracooled atoms into a trap formed by a laser. A second, very stable laser, with a frequency close to that of the atoms’ vibrations, is sent to probe the atomic oscillation and thereby keep track of time.
And yet, the Standard Quantum Limit is still at work, meaning there is still some uncertainty, even among thousands of atoms, regarding their exact individual frequencies. This is where Vuletic and his group have shown that quantum entanglement may help. In general, quantum entanglement describes a nonclassical physical state, in which atoms in a group show correlated measurement results, even though each individual atom behaves like the random toss of a coin.
The team reasoned that if atoms are entangled, their individual oscillations would tighten up around a common frequency, with less deviation than if they were not entangled. The average oscillations that an atomic clock would measure, therefore, would have a precision beyond the Standard Quantum Limit.
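The 1/√N statistics behind the Standard Quantum Limit can be illustrated with a toy Monte Carlo simulation. This is an illustration of the scaling only, not the MIT experiment: each unentangled atom contributes a coin-flip-like measurement, and the phase (time) uncertainty shrinks only as one over the square root of the number of atoms:

```python
import numpy as np

rng = np.random.default_rng(0)

def phase_estimate_std(n_atoms, true_phase=0.1, trials=2000):
    """Each unentangled atom yields a binary outcome with probability
    p = (1 + sin(phase))/2; estimate the phase from the counts and
    return the spread of that estimate over many repeated trials."""
    p = (1 + np.sin(true_phase)) / 2
    counts = rng.binomial(n_atoms, p, size=trials)
    estimates = np.arcsin(2 * counts / n_atoms - 1)
    return float(estimates.std())

# the uncertainty shrinks like 1/sqrt(N) -- the Standard Quantum Limit
spread_100 = phase_estimate_std(100)      # roughly 0.1 rad
spread_10000 = phase_estimate_std(10000)  # roughly 0.01 rad
```

Entangled (spin-squeezed) atoms correlate those coin flips, so the spread can fall below this 1/√N floor, which is why the entangled clock reaches a given precision faster.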
In their new atomic clock, Vuletic and his colleagues entangle around 350 atoms of ytterbium, which oscillate at the same very high frequency as visible light, meaning any one atom vibrates 100,000 times more often in one second than a cesium atom. If ytterbium’s oscillations can be tracked precisely, scientists can use the atoms to distinguish ever smaller intervals of time.
The group used standard techniques to cool the atoms and trap them in an optical cavity formed by two mirrors. They then sent a laser through the optical cavity, where it ping-ponged between the mirrors, interacting with the atoms thousands of times.
“It’s like the light serves as a communication link between atoms,” Shu explains. “The first atom that sees this light will modify the light slightly, and that light also modifies the second atom, and the third atom, and through many cycles, the atoms collectively know each other and start behaving similarly.”
In this way, the researchers quantumly entangle the atoms, and then use another laser, similar to existing atomic clocks, to measure their average frequency. When the team ran a similar experiment without entangling atoms, they found that the atomic clock with entangled atoms reached a desired precision four times faster.
“You can always make the clock more accurate by measuring longer,” Vuletic says. “The question is, how long do you need to reach a certain precision. Many phenomena need to be measured on fast timescales.”
He says if today’s state-of-the-art atomic clocks can be adapted to measure quantumly entangled atoms, they would not only keep better time, but they could help decipher signals in the universe such as dark matter and gravitational waves, and start to answer some age-old questions.
“As the universe ages, does the speed of light change? Does the charge of the electron change?” Vuletic says. “That’s what you can probe with more precise atomic clocks.”
The famous patient Henry Molaison (long known as H.M.) suffered damage to his hippocampus after a surgical attempt to cure his epilepsy. As a result, he had anterograde amnesia, which meant that things he learned never made it past his short-term memory. Though his memories of childhood remained intact, H.M. might meet with his doctor and five minutes later say, ‘Oh, I don’t think I’ve ever met you. What’s your name?’
H.M. helped scientists understand the role of the hippocampus in learning, but a mystery remains around how signals from it somehow get shared with the billions of neurons throughout the cortex that change in a coordinated fashion when we learn. In a paper published today in the journal Science, a collaboration between the University of Ottawa and Humboldt University of Berlin reveals a critical role for a brain area called the perirhinal cortex in managing this learning process.
The study involved mice and rats learning a rather strange brain-based skill. A single neuron in the sensory cortex was stimulated, and the rodent had to show it had felt the buzz by licking a dispenser to receive some sweetened water. No one can say for sure what that brain stimulation feels like for the animal, but the team’s best guess is that it mimics the feeling of something touching its whiskers.
As they watched the brain responding to this learning experience, the team observed that the perirhinal cortex was serving as a waystation between the nearby hippocampus, which processes place and context, and the outer layer of the cortex.
“The perirhinal cortex happens to be at the very top of the hierarchy of processing of information in the cortex. It accumulates information from multiple senses and then sends it back to the rest of the cortex,” says Dr. Richard Naud, an assistant professor in the Faculty of Medicine’s Department of Cellular and Molecular Medicine, and in the Brain and Mind Research Institute. “What we are showing is that it has a very important role in coordinating learning. Without these projections coming back from the conceptual area, the animals are not able to learn anymore.”
Previous studies have focused on communication from the hippocampus upward into the decision-making regions of the brain like the perirhinal cortex, but there has not been as much attention paid to what the perirhinal cortex does with that information, and what it sends back down to Layer 1 of the cortex. It turns out this step is a key part of the process, without which learning is impossible.
“When the connection from the perirhinal cortex back to those layer 1 neurons was cut, the animals acted a lot like H.M. They were improving a little bit, but it wouldn’t stick. They would just learn and forget, learn and forget, learn and forget,” says Dr. Naud.
A computational neuroscientist with a background in physics, Dr. Naud was responsible for statistical analyses, as well as the creation of computational models that map out the brain’s information processing. Of particular interest to him was confirmation of what he had long suspected: that rapid bursts of firing from a neuron have a distinctive meaning, apart from what is meant by a slower pace of electrical activity. When the animals were in the midst of learning, these rapid-fire action potentials lit up the monitored cells.
The team was able to recreate the burst effect artificially as well.
“If you force the same number of action potentials but at a high frequency, then the animal is better at detecting it,” says Dr. Naud. “This would imply that bursts are correlated with learning and causally related to perception. Meaning that you are more likely to perceive something if it creates a burst in your neurons.”
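The distinction Dr. Naud describes, the same number of spikes packed into a short window versus spread out in time, can be captured by a simple inter-spike-interval rule. This is a toy heuristic for illustration, not the study's analysis; the 10 ms threshold and 3-spike minimum are arbitrary choices:

```python
def is_burst(spike_times_ms, isi_threshold_ms=10.0, min_spikes=3):
    """Flag a spike train as containing a burst: at least `min_spikes`
    consecutive spikes whose inter-spike intervals are all short."""
    run = 1
    for earlier, later in zip(spike_times_ms, spike_times_ms[1:]):
        if later - earlier <= isi_threshold_ms:
            run += 1
            if run >= min_spikes:
                return True
        else:
            run = 1
    return False

# same number of action potentials, different packing in time:
burst_train = [0.0, 4.0, 8.0, 12.0]      # high-frequency -> burst
tonic_train = [0.0, 50.0, 100.0, 150.0]  # spread out -> no burst
```

The point of the experiment is precisely that these two trains, identical in spike count, carry different meaning for the animal.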
The next challenge is to figure out exactly what that learning signal from the perirhinal cortex to the lower order brain areas looks like. Dr. Naud is busy working on a computational model relating our existing knowledge of physiology to what this experiment is seeing.
This is a reblog of an article in ScienceDaily. See the original here.
Quantum computers have already managed to surpass ordinary computers in solving certain tasks — unfortunately, totally useless ones. The next milestone is to get them to do useful things. Researchers at Chalmers University of Technology, Sweden, have now shown that they can solve a small part of a real logistics problem with their small, but well-functioning quantum computer.
Interest in building quantum computers has gained considerable momentum in recent years, and feverish work is underway in many parts of the world. In 2019, Google’s research team made a major breakthrough when their quantum computer managed to solve a task far more quickly than the world’s best supercomputer. The downside is that the solved task had no practical use whatsoever — it was chosen because it was judged to be easy to solve for a quantum computer, yet very difficult for a conventional computer.
Therefore, an important task is now to find useful, relevant problems that are beyond the reach of ordinary computers, but which a relatively small quantum computer could solve.
“We want to be sure that the quantum computer we are developing can help solve relevant problems early on. Therefore, we work in close collaboration with industrial companies,” says theoretical physicist Giulia Ferrini, one of the leaders of Chalmers University of Technology’s quantum computer project, which began in 2018.
Together with Göran Johansson, Giulia Ferrini led the theoretical work when a team of researchers at Chalmers, including an industrial doctoral student from the aviation logistics company Jeppesen, recently showed that a quantum computer can solve an instance of a real problem in the aviation industry.
The algorithm proven on two qubits

All airlines are faced with scheduling problems. For example, assigning individual aircraft to different routes represents an optimisation problem, one that grows very rapidly in size and complexity as the number of routes and aircraft increases.
Researchers hope that quantum computers will eventually be better at handling such problems than today’s computers. The basic building block of the quantum computer — the qubit — is based on completely different principles than the building blocks of today’s computers, allowing them to handle enormous amounts of information with relatively few qubits.
However, due to their different structure and function, quantum computers must be programmed in other ways than conventional computers. One proposed algorithm that is believed to be useful on early quantum computers is the so-called Quantum Approximate Optimization Algorithm (QAOA).
The Chalmers research team has now successfully executed said algorithm on their quantum computer — a processor with two qubits — and they showed that it can successfully solve the problem of assigning aircraft to routes. In this first demonstration, the result could be easily verified as the scale was very small — it involved only two airplanes.
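The flavour of such a two-qubit demonstration can be reproduced on a laptop. The sketch below is a generic depth-1 QAOA on the simplest possible problem (MaxCut on a single edge), not the Chalmers tail-assignment instance; it builds the circuit as plain matrices and grid-searches the two QAOA angles, finding parameters that reach the optimal cost of 1:

```python
import numpy as np

# Pauli matrices and the two-qubit cost operator for MaxCut on one edge
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
C = (np.eye(4) - np.kron(Z, Z)) / 2   # eigenvalue 1 when the qubits differ

plus = np.ones(4) / 2                  # |++>, the uniform superposition

def qaoa_expectation(gamma, beta):
    """Depth-1 QAOA: cost layer exp(-i*gamma*C), then a mixer
    exp(-i*beta*X) applied to each qubit."""
    Uc = np.diag(np.exp(-1j * gamma * np.diag(C)))
    ub = np.cos(beta) * I2 - 1j * np.sin(beta) * X
    psi = np.kron(ub, ub) @ (Uc @ plus)
    return float(np.real(psi.conj() @ C @ psi))

# crude grid search over the two variational angles
best = max(qaoa_expectation(g, b)
           for g in np.linspace(0, np.pi, 65)
           for b in np.linspace(0, np.pi / 2, 65))
```

On real hardware the expectation is estimated from repeated measurements and the angles are tuned by a classical optimiser; the principle, a shallow parameterised circuit whose angles are optimised against a cost operator, is the same.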
Potential to handle many aircraft

With this feat, the researchers were the first to show that the QAOA algorithm can solve the problem of assigning aircraft to routes in practice. They also managed to run the algorithm one level further than anyone before, an achievement that requires very good hardware and accurate control.
“We have shown that we have the ability to map relevant problems onto our quantum processor. We still have a small number of qubits, but they work well. Our plan has been to first make everything work very well on a small scale, before scaling up,” says Jonas Bylander, senior researcher responsible for the experimental design, and one of the leaders of the project of building a quantum computer at Chalmers.
The theorists in the research team also simulated solving the same optimisation problem for up to 278 aircraft, which would require a quantum computer with 25 qubits.
“The results remained good as we scaled up. This suggests that the QAOA algorithm has the potential to solve this type of problem at even larger scales,” says Giulia Ferrini.
Surpassing today’s best computers would, however, require much larger devices. The researchers at Chalmers have now begun scaling up and are currently working with five quantum bits. The plan is to reach at least 20 qubits by 2021 while maintaining the high quality.
Applying the Quantum Approximate Optimization Algorithm to the Tail-Assignment Problem. Physical Review Applied, 2020; 14 (3) DOI: 10.1103/PhysRevApplied.14.034009
Researchers at the Institute of Biotechnology (IBt) of the Universidad Nacional Autónoma de México (UNAM) have created a promising plant-based anti-inflammatory to treat obesity and Alzheimer’s.
Using extracts from the plant Malva parviflora, the researchers have shown that the drug is effective in mice at combating the inflammatory process that occurs in these chronic degenerative diseases.
Inflammation is a natural response of the body to different pathogens. It helps to create an adequate immune response and it can also help repair tissue damaged by trauma.
This process is essential for the body to return to homeostasis (self-regulation) once it has eliminated the pathogen or repaired the tissue. On the other hand, we now know that low-grade chronic inflammation is a common factor in many chronic degenerative diseases. Martín Gustavo Pedraza Alva, a researcher at the Institute of Biotechnology, reminds us that it is therefore important to understand this process at a molecular level – how it begins and how we could regulate it.
Together with Leonor Pérez Martínez, Pedraza Alva is part of the Neuroimmunobiology Consortium in the IBt Department of Molecular Medicine and Bioprocesses, where they use mouse models of obesity and Alzheimer’s. The researcher pointed out that they are working with the Malva parviflora plant, from which they prepare an extract that is tested in models of Alzheimer’s and obesity.
Administering this hydroalcoholic extract delays the appearance of the hallmarks of the disease. The animals that receive it maintain their cognitive capacity, accumulate fewer senile plaques, and show diminished inflammation markers within the central nervous system, he said.
In mice that were given a high-fat diet, which normally develop insulin resistance and glucose intolerance, the Malva parviflora extract prevented glucose metabolism disorders and maintained their sensitivity to insulin and glucose tolerance.
A Malva parviflora fraction prevents the deleterious effects resulting from neuroinflammation. Biomedicine & Pharmacotherapy, 2020. DOI: 10.1016/j.biopha.2019.109349 https://pubmed.ncbi.nlm.nih.gov/31545221/
In a recent paper published on the arXiv, researchers have highlighted the advantages that artificial intelligence techniques bring to research in fields such as astrophysics. They are making their models available, and that is always a great thing to see. They mention the use of these techniques to detect binary neutron stars, and to forecast the merger of multi-messenger sources, such as binary neutron stars and neutron star-black hole systems. Here are some highlights from the paper:
Finding new ways to use artificial intelligence (AI) to accelerate the analysis of gravitational wave data, and ensuring the developed models are easily reusable promises to unlock new opportunities in multi-messenger astrophysics (MMA), and to enable wider use, rigorous validation, and sharing of developed models by the community. In this work, we demonstrate how connecting recently deployed DOE and NSF-sponsored cyberinfrastructure allows for new ways to publish models, and to subsequently deploy these models into applications using computing platforms ranging from laptops to high performance computing clusters. We develop a workflow that connects the Data and Learning Hub for Science (DLHub), a repository for publishing machine learning models, with the Hardware Accelerated Learning (HAL) deep learning computing cluster, using funcX as a universal distributed computing service. We then use this workflow to search for binary black hole gravitational wave signals in open source advanced LIGO data. We find that using this workflow, an ensemble of four openly available deep learning models can be run on HAL and process the entire month of August 2017 of advanced LIGO data in just seven minutes, identifying all four binary black hole mergers previously identified in this dataset, and reporting no misclassifications. This approach, which combines advances in AI, distributed computing, and scientific data infrastructure opens new pathways to conduct reproducible, accelerated, data-driven gravitational wave detection.
Research and development of AI models for gravitational wave astrophysics is evolving at a rapid pace. In less than four years, this area of research has evolved from disruptive prototypes into sophisticated AI algorithms that describe the same 4-D signal manifold as traditional gravitational wave detection pipelines for binary black hole mergers, namely, quasi-circular, spinning, non-precessing, binary systems; have the same sensitivity as template matching algorithms; and are orders of magnitude faster, at a fraction of the computational cost.
AI models have been proven to effectively identify real gravitational wave signals in advanced LIGO data, including binary black hole and binary neutron star mergers. The current pace of progress makes it clear that the broader community will continue to advance the development of AI tools to realize the science goals of Multi-Messenger Astrophysics.
Furthermore, mirroring the successful approach of corporations leading AI innovation in industry and technology, we are releasing our AI models to enable the broader community to use and perfect them. This approach is also helpful to address healthy and constructive skepticism from members of the community who do not feel at ease using AI algorithms.