A collection of posts related to Science and Technology, including Physics, Mathematics, Statistics, Biology, Chemistry, and more.

Take a look and enjoy

Meet the Newest Member of the Fluorescent Mammal Club


The springhare -- whose coat glows a patchy pinkish-orange under UV light -- joins the platypus and other mammals with this perplexing trait.

Related posts: Fluorescent Platypuses (??) and A Question Hidden in the Platypus Genome: Are We the Weird Ones? – The New York Times


A Question Hidden in the Platypus Genome: Are We the Weird Ones? - The New York Times

Researchers have produced the most comprehensive platypus genome yet, as well as that of another monotreme, an echidna.
— Read on www.nytimes.com/2021/01/09/science/platypus-genome-echidna.html


Quantum interference in time

Bosons -- especially photons -- have a natural tendency to clump together. In 1987, three physicists conducted a remarkable experiment demonstrating this clustering property, known as the Hong-Ou-Mandel effect. Recently, researchers at ULB's Centre for Quantum Information and Communication have identified another way in which photons manifest their propensity to stick together. This research has just been published in Proceedings of the National Academy of Sciences.

Since the very beginning of quantum physics, a hundred years ago, it has been known that all particles in the universe fall into two categories: fermions and bosons. For instance, the protons found in atomic nuclei are fermions, while bosons include photons -- which are particles of light -- as well as the Brout-Englert-Higgs boson, for which François Englert, a professor at ULB, was awarded a Nobel Prize in Physics in 2013.

Bosons -- especially photons -- have a natural tendency to clump together. One of the most remarkable experiments demonstrating photons' tendency to coalesce was conducted in 1987, when three physicists identified an effect that has since been named after them: the Hong-Ou-Mandel effect. If two photons are sent simultaneously, each towards a different side of a beam splitter (a sort of semi-transparent mirror), one could expect that each photon will be either reflected or transmitted.

Logically, photons should sometimes be detected on opposite sides of this mirror, which would happen if both are reflected or if both are transmitted. However, the experiment has shown that this never actually happens: the two photons always end up on the same side of the mirror, as though they 'preferred' sticking together! In an article published recently in US journal Proceedings of the National Academy of Sciences, Nicolas Cerf -- a professor at the Centre for Quantum Information and Communication (École polytechnique de Bruxelles) -- and his former PhD student Michael Jabbour -- now a postdoctoral researcher at the University of Cambridge -- describe how they identified another way in which photons manifest their tendency to stay together. Instead of a semi-transparent mirror, the researchers used an optical amplifier, called an active component because it produces new photons. They were able to demonstrate the existence of an effect similar to the Hong-Ou-Mandel effect, but which in this case captures a new form of quantum interference.

Quantum physics tells us that the Hong-Ou-Mandel effect is a consequence of the interference phenomenon, coupled with the fact that both photons are absolutely identical. This means it is impossible to distinguish the trajectory in which both photons were reflected off the mirror on the one hand, and the trajectory in which both were transmitted through the mirror on the other hand; it is fundamentally impossible to tell the photons apart. The remarkable consequence of this is that both trajectories cancel each other out! As a result, the two photons are never observed on the two opposite sides of the mirror. This property of photons is quite elusive: if they were tiny balls, identical in every way, both of these trajectories could very well be observed. As is often the case, quantum physics is at odds with our classical intuition.
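The cancellation can be made concrete in a few lines of arithmetic. The sketch below is a toy amplitude calculation, not a model of the 1987 experiment; it assumes the standard lossless 50/50 beam-splitter convention with transmission amplitude 1/√2 and reflection amplitude i/√2.

```python
import numpy as np

# Toy Hong-Ou-Mandel amplitude calculation (illustrative sketch only).
# A lossless 50/50 beam splitter has transmission t = 1/sqrt(2) and
# reflection r = i/sqrt(2); the factor i is required for unitarity.
t = 1 / np.sqrt(2)
r = 1j / np.sqrt(2)

# One photon enters each input port. Two indistinguishable paths lead
# to a coincidence (one photon at each output port):
both_transmitted = t * t
both_reflected = r * r

# Identical photons: the two path AMPLITUDES add, and they cancel.
coincidence_amplitude = both_transmitted + both_reflected
print(abs(coincidence_amplitude) ** 2)  # 0: photons never exit on opposite sides

# Distinguishable "tiny balls" would add PROBABILITIES instead,
# and coincidences would occur half the time (~0.5):
classical_coincidence = abs(both_transmitted) ** 2 + abs(both_reflected) ** 2
print(classical_coincidence)
```

The contrast between the two final numbers is exactly the gap between quantum interference and classical intuition described above.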

The two researchers from ULB and the University of Cambridge have demonstrated that the impossibility of differentiating the photons emitted by an optical amplifier produces an effect that may be even more surprising. Fundamentally, the interference that occurs on a semi-transparent mirror stems from the fact that if we imagine switching the two photons on either side of the mirror, the resulting configuration is exactly identical. With an optical amplifier, on the other hand, the effect identified by Cerf and Jabbour must be understood by looking at photon exchanges not through space, but through time.

When two photons are sent into an optical amplifier, they can simply pass through unaffected. However, an optical amplifier can also produce (or destroy) a pair of twin photons: so another possibility is that both photons are eliminated and a new pair is created. In principle, it should be possible to tell which scenario has occurred based on whether the two photons exiting the optical amplifier are identical to those that were sent in. If it were possible to tell the pairs of photons apart, then the trajectories would be different and there would be no quantum effect. However, the researchers have found that the fundamental impossibility of telling the photons apart in time (in other words, of knowing whether they have been replaced inside the optical amplifier) completely eliminates the very possibility of observing a pair of photons exiting the amplifier. This means the researchers have indeed identified a quantum interference phenomenon that occurs through time. Hopefully, an experiment will eventually confirm this fascinating prediction.

Two-boson quantum interference in time. Proceedings of the National Academy of Sciences, 2020; 202010827 DOI: 10.1073/pnas.2010827117


Sci-Advent - Perfect quantum transmission through barrier using sound

This is a reblog of an article in ScienceDaily. See the original here.

A research team has for the first time experimentally proved a century-old quantum theory that relativistic particles can pass through a barrier with 100% transmission.

The perfect transmission of sound through a barrier is difficult to achieve, if not impossible based on our existing knowledge. This is also true with other energy forms such as light and heat.

A research team led by Professor Xiang Zhang, President of the University of Hong Kong (HKU), when he was a professor at the University of California, Berkeley (UC Berkeley), has for the first time experimentally proved a century-old quantum theory that relativistic particles can pass through a barrier with 100% transmission. The research findings have been published in the top academic journal Science.

Without enough energy, we would find it difficult to jump over a thick, high wall. In contrast, a microscopic particle in the quantum world is predicted to pass through a barrier well beyond its energy, regardless of the height or width of the barrier, as if it were "transparent."

As early as 1929, theoretical physicist Oscar Klein proposed that a relativistic particle can penetrate a potential barrier with 100% transmission upon normal incidence on the barrier. Scientists called this exotic and counterintuitive phenomenon "Klein tunneling." In the decades that followed, scientists tried various approaches to test Klein tunneling experimentally, but the attempts were unsuccessful and direct experimental evidence was still lacking.
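The chirality argument behind Klein's prediction can be sketched numerically. The example below is an illustrative simplification, assuming a one-dimensional massless Dirac particle hitting a potential step rather than the actual phononic system: because the right- and left-moving spinors are the same on both sides of the step, matching the wavefunction forces the reflection amplitude to zero.

```python
import numpy as np

# Sketch: no backscattering for a 1D massless Dirac particle at a
# potential step (the essence of Klein tunneling). For H = v*p*sigma_x + V,
# the right- and left-moving plane-wave spinors are (1, 1)/sqrt(2) and
# (1, -1)/sqrt(2) at ANY energy -- the potential shifts the wavevector
# but leaves the spinor unchanged.
right_mover = np.array([1.0, 1.0]) / np.sqrt(2)
left_mover = np.array([1.0, -1.0]) / np.sqrt(2)

# Continuity at the step: incident + r * (left mover) = t * (right mover).
# Solve the 2x2 linear system for the unknowns (r, t).
A = np.column_stack([-left_mover, right_mover])
r_coef, t_coef = np.linalg.solve(A, right_mover)

print(abs(r_coef) ** 2)  # reflection probability: 0
print(abs(t_coef) ** 2)  # transmission probability: 1
```

Orthogonality of the two spinors is what makes the reflection vanish; a massive particle mixes the spinor components and recovers ordinary partial reflection.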

Professor Zhang's team conducted the experiment in artificially designed phononic crystals with a triangular lattice. The lattice's linear dispersion properties make it possible to mimic the relativistic Dirac quasiparticle by sound excitation, which led to the successful experimental observation of Klein tunneling.

"This is an exciting discovery. Quantum physicists have always tried to observe Klein tunneling in elementary particle experiments, but it is a very difficult task. We designed a phononic crystal similar to graphene that can excite the relativistic quasiparticles, but unlike natural material of graphene, the geometry of the human-made phononic crystal can be adjusted freely to precisely achieve the ideal conditions that made it possible to the first direct observation of Klein tunneling," said Professor Zhang.

The achievement not only represents a breakthrough in fundamental physics, but also presents a new platform for exploring emerging macroscale systems to be used in applications such as on-chip logic devices for sound manipulation, acoustic signal processing, and sound energy harvesting.

"In current acoustic communications, the transmission loss of acoustic energy on the interface is unavoidable. If the transmittance on the interface can be increased to nearly 100%, the efficiency of acoustic communications can be greatly improved, thus opening up cutting-edge applications. This is especially important when the surface or the interface play a role in hindering the accuracy acoustic detection such as underwater exploration. The experimental measurement is also conducive to the future development of studying quasiparticles with topological property in phononic crystals which might be difficult to perform in other systems," said Dr. Xue Jiang, a former member of Zhang's team and currently an Associate Researcher at the Department of Electronic Engineering at Fudan University.

Dr. Jiang pointed out that the research findings might also benefit biomedical devices. They may help improve the accuracy of ultrasound penetration through obstacles to reach designated targets such as tissues or organs, which could improve ultrasound precision for better diagnosis and treatment.

On the basis of the current experiments, researchers can control the mass and dispersion of the quasiparticle by exciting the phononic crystals at different frequencies, thus achieving flexible experimental configuration and on/off control of Klein tunneling. This approach can be extended to other artificial structures for the study of optics and thermotics. It allows unprecedented control of quasiparticles and wavefronts, and contributes to the exploration of other complex quantum physical phenomena.

Direct observation of Klein tunneling in phononic crystals. Science, 2020 DOI: 10.1126/science.abe2011


Sci-Advent - New study tests machine learning on detection of borrowed words in world languages

This is a reblog of a story in ScienceDaily. See the original here.

Underwhelming results underscore the complexity of language evolution while showing promise in some current applications

Researchers have investigated the ability of machine learning algorithms to identify lexical borrowings using word lists from a single language. Results show that current machine learning methods alone are insufficient for borrowing detection, confirming that additional data and expert knowledge are needed to tackle one of historical linguistics' most pressing challenges.

Lexical borrowing, or the direct transfer of words from one language to another, has interested scholars for millennia, as evidenced already in Plato's Kratylos dialogue, in which Socrates discusses the challenge imposed by borrowed words on etymological studies. In historical linguistics, lexical borrowings help researchers trace the evolution of modern languages and indicate cultural contact between distinct linguistic groups -- whether recent or ancient. However, the techniques for identifying borrowed words have resisted formalization, demanding that researchers rely on a variety of proxy information and the comparison of multiple languages.

"The automated detection of lexical borrowings is still one of the most difficult tasks we face in computational historical linguistics," says Johann-Mattis List, who led the study.

In the current study, researchers from PUCP and MPI-SHH employed different machine learning techniques to train language models that mimic the way in which linguists identify borrowings when considering only the evidence provided by a single language: if sounds or the ways in which sounds combine to form words are atypical when comparing them with other words in the same language, this often hints to recent borrowings. The models were then applied to a modified version of the World Loanword Database, a catalog of borrowing information for a sample of 40 languages from different language families all over the world, in order to see how accurately words within a given language would be classified as borrowed or not by the different techniques.

In many cases the results were unsatisfying, suggesting that loanword detection is too difficult for the most commonly used machine learning methods. However, in specific situations, such as in lists with a high proportion of loanwords or in languages whose loanwords come primarily from a single donor language, the team's lexical language models showed some promise.

"After these first experiments with monolingual lexical borrowings, we can proceed to stake out other aspects of the problem, moving into multilingual and cross-linguistic approaches," says John Miller of PUCP, the study's co-lead author.

"Our computer-assisted approach, along with the dataset we are releasing, will shed a new light on the importance of computer-assisted methods for language comparison and historical linguistics," adds Tiago Tresoldi, the study's other co-lead author from MPI-SHH.

The study joins ongoing efforts to tackle one of the most challenging problems in historical linguistics, showing that loanword detection cannot rely on mono-lingual information alone. In the future, the authors hope to develop better-integrated approaches that take multi-lingual information into account.

Using lexical language models to detect borrowings in monolingual wordlists. PLOS ONE, 2020; 15 (12): e0242709 DOI: 10.1371/journal.pone.0242709


Sci-Advent - Flexible and powerful electronics

Researchers at the University of Tsukuba have created a new carbon-based electrical device, π-ion gel transistors (PIGTs), by using an ionic gel made of a conductive polymer. This work may lead to cheaper and more reliable flexible printable electronics.

Organic conductors, which are carbon-based polymers that can carry electrical currents, have the potential to radically change the way electronic devices are manufactured. These conductors have properties that can be tuned via chemical modification and may be easily printed as circuits. Compared with current silicon solar panels and transistors, systems based on organic conductors could be flexible and easier to install. However, their electrical conductivity can be drastically reduced if the conjugated polymer chains become disordered because of incorrect processing, which greatly limits their ability to compete with existing technologies.

Now, a team of researchers led by the University of Tsukuba has formulated a novel method for preserving the electrical properties of organic conductors by forming an "ion gel." In this case, the solvent around the poly(para-phenyleneethynylene) (PPE) chains was replaced with an ionic liquid, which then turned into a gel. Using confocal fluorescent microscopy and scanning electron microscopy, the researchers were able to verify the morphology of the organic conductor.

"We showed that the internal structure of our π-ion gel is a nanofiber network of PPE, which is very good at reliably conducting electricity" says author Professor Yohei Yamamoto.

In addition to acting as wires for delocalized electrons, the polymer chains direct the flow of mobile ions, which can help move charge-carriers to the carbon rings. This allows current to flow through the entire volume of the device. The resulting transistor can switch on and off in response to voltage changes in less than 20 microseconds -- which is faster than any previous device of this type.

"We plan to use this advance in supramolecular chemistry and organic electronics to design a whole arrange of flexible electronic devices," explains Professor Yamamoto. The fast response time and high conductivity open the way for flexible sensors that enjoy the ease of fabrication associated with organic conductors, without sacrificing speed or performance.

Fast Response Organic Supramolecular Transistors Utilizing In‐Situ π‐Ion Gels. Advanced Materials, 2020; 2006061 DOI: 10.1002/adma.202006061


Sci-Advent - New type of atomic clock keeps time even more precisely

This is a reblog of an article in ScienceDaily. See the original here.

A newly-designed atomic clock uses entangled atoms to keep time even more precisely than its state-of-the-art counterparts. The design could help scientists detect dark matter and study gravity's effect on time.

Atomic clocks are the most precise timekeepers in the world. These exquisite instruments use lasers to measure the vibrations of atoms, which oscillate at a constant frequency, like many microscopic pendulums swinging in sync. The best atomic clocks in the world keep time with such precision that, if they had been running since the beginning of the universe, they would only be off by about half a second today.

Still, they could be even more precise. If atomic clocks could more accurately measure atomic vibrations, they would be sensitive enough to detect phenomena such as dark matter and gravitational waves. With better atomic clocks, scientists could also start to answer some mind-bending questions, such as what effect gravity might have on the passage of time and whether time itself changes as the universe ages.

Now a new kind of atomic clock designed by MIT physicists may enable scientists to explore such questions and possibly reveal new physics.

The researchers report in the journal Nature that they have built an atomic clock that measures not a cloud of randomly oscillating atoms, as state-of-the-art designs measure now, but instead atoms that have been quantumly entangled. The atoms are correlated in a way that is impossible according to the laws of classical physics, and that allows the scientists to measure the atoms' vibrations more accurately.

The new setup can achieve the same precision four times faster than clocks without entanglement.

"Entanglement-enhanced optical atomic clocks will have the potential to reach a better precision in one second than current state-of-the-art optical clocks," says lead author Edwin Pedrozo-Peñafiel, a postdoc in MIT's Research Laboratory of Electronics.advertisement

If state-of-the-art atomic clocks were adapted to measure entangled atoms the way the MIT team's setup does, their timing would improve such that, over the entire age of the universe, the clocks would be less than 100 milliseconds off.

The paper's other co-authors from MIT are Simone Colombo, Chi Shu, Albert Adiyatullin, Zeyang Li, Enrique Mendez, Boris Braverman, Akio Kawasaki, Saisuke Akamatsu, Yanhong Xiao, and Vladan Vuletic, the Lester Wolfe Professor of Physics.

Time limit 

Since humans began tracking the passage of time, they have done so using periodic phenomena, such as the motion of the sun across the sky. Today, vibrations in atoms are the most stable periodic events that scientists can observe. Furthermore, one cesium atom will oscillate at exactly the same frequency as another cesium atom.

To keep perfect time, clocks would ideally track the oscillations of a single atom. But at that scale, an atom is so small that it behaves according to the mysterious rules of quantum mechanics: when measured, it behaves like a flipped coin, and only the average over many flips gives the correct probabilities. This limitation is what physicists refer to as the Standard Quantum Limit.

"When you increase the number of atoms, the average given by all these atoms goes toward something that gives the correct value," says Colombo.

This is why today's atomic clocks are designed to measure a gas composed of thousands of the same type of atom, in order to get an estimate of their average oscillations. A typical atomic clock does this by first using a system of lasers to corral a gas of ultracooled atoms into a trap formed by a laser. A second, very stable laser, with a frequency close to that of the atoms' vibrations, is sent to probe the atomic oscillation and thereby keep track of time.

And yet, the Standard Quantum Limit is still at work, meaning there is still some uncertainty, even among thousands of atoms, regarding their exact individual frequencies. This is where Vuletic and his group have shown that quantum entanglement may help. In general, quantum entanglement describes a nonclassical physical state, in which atoms in a group show correlated measurement results, even though each individual atom behaves like the random toss of a coin.

The team reasoned that if atoms are entangled, their individual oscillations would tighten up around a common frequency, with less deviation than if they were not entangled. The average oscillations that an atomic clock would measure, therefore, would have a precision beyond the Standard Quantum Limit.

Entangled clocks 

In their new atomic clock, Vuletic and his colleagues entangle around 350 atoms of ytterbium, which oscillate at the same very high frequency as visible light, meaning any one atom vibrates 100,000 times more often in one second than cesium. If ytterbium's oscillations can be tracked precisely, scientists can use the atoms to distinguish ever smaller intervals of time.

The group used standard techniques to cool the atoms and trap them in an optical cavity formed by two mirrors. They then sent a laser through the optical cavity, where it ping-ponged between the mirrors, interacting with the atoms thousands of times.

"It's like the light serves as a communication link between atoms," Shu explains. "The first atom that sees this light will modify the light slightly, and that light also modifies the second atom, and the third atom, and through many cycles, the atoms collectively know each other and start behaving similarly."

In this way, the researchers quantumly entangle the atoms, and then use another laser, similar to existing atomic clocks, to measure their average frequency. When the team ran a similar experiment without entangling atoms, they found that the atomic clock with entangled atoms reached a desired precision four times faster.

"You can always make the clock more accurate by measuring longer," Vuletic says. "The question is, how long do you need to reach a certain precision. Many phenomena need to be measured on fast timescales."

He says if today's state-of-the-art atomic clocks can be adapted to measure quantumly entangled atoms, they would not only keep better time, but they could help decipher signals in the universe such as dark matter and gravitational waves, and start to answer some age-old questions.

"As the universe ages, does the speed of light change? Does the charge of the electron change?" Vuletic says. "That's what you can probe with more precise atomic clocks."

Entanglement on an optical atomic-clock transition. Nature, 2020 DOI: 10.1038/s41586-020-3006-1


Sci-Advent - How does the brain manage its learning?

The famous patient Henry Molaison (long known as H.M.) suffered damage to his hippocampus after a surgical attempt to cure his epilepsy. As a result, he had anterograde amnesia, which meant that things he learned never made it past his short-term memory. Though his memories of childhood remained intact, H.M. might meet with his doctor and five minutes later say, 'Oh, I don't think I've ever met you. What's your name?'

H.M. helped scientists understand the role of the hippocampus in learning, but a mystery remains around how signals from it somehow get shared with the billions of neurons throughout the cortex that change in a coordinated fashion when we learn. In a paper published today in the journal Science, a collaboration between the University of Ottawa and Humboldt University of Berlin reveals a critical role for a brain area called the perirhinal cortex in managing this learning process.

The study involved mice and rats learning a rather strange brain-based skill. A single neuron in the sensory cortex was stimulated, and the rodent had to show it had felt the buzz by licking a dispenser to receive some sweetened water. No one can say for sure what that brain stimulation feels like for the animal, but the team's best guess is that it mimics the feeling of something touching its whiskers.

As they watched the brain responding to this learning experience, the team observed that the perirhinal cortex was serving as a waystation between the nearby hippocampus, which processes place and context, and the outer layer of the cortex.

"The perirhinal cortex happens to be at the very top of the hierarchy of processing of information in the cortex. It accumulates information from multiple senses and then sends it back to the rest of the cortex," says Dr. Richard Naud, an assistant professor in the Faculty of Medicine's Department of Cellular and Molecular Medicine, and in the Brain and Mind Research Institute. "What we are showing is that it has a very important role in coordinating learning. Without these projections coming back from the conceptual area, the animals are not able to learn anymore."

Previous studies have focused on communication from the hippocampus upward into the decision-making regions of the brain like the perirhinal cortex, but there has not been as much attention paid to what the perirhinal cortex does with that information, and what it sends back down to Layer 1 of the cortex. It turns out this step is a key part of the process, without which learning is impossible.

"When the connection from the perirhinal cortex back to those layer 1 neurons was cut, the animals acted a lot like H.M. They were improving a little bit, but it wouldn't stick. They would just learn and forget, learn and forget, learn and forget," says Dr. Naud.

A computational neuroscientist with a background in physics, Dr. Naud was responsible for statistical analyses, as well as the creation of computational models that map out the brain's information processing. Of particular interest to him was confirmation of what he had long suspected: that rapid bursts of firing from a neuron have a distinctive meaning, apart from what is meant by a slower pace of electrical activity. When the animals were in the midst of learning, these rapid-fire action potentials lit up the monitored cells.

The team was able to recreate the burst effect artificially as well.

"If you force the same number of action potentials but at a high frequency, then the animal is better at detecting it," says Dr. Naud. "This would imply that bursts are correlated with learning and causally related to perception. Meaning that you are more likely to perceive something if it creates a burst in your neurons."

The next challenge is to figure out exactly what that learning signal from the perirhinal cortex to the lower order brain areas looks like. Dr. Naud is busy working on a computational model relating our existing knowledge of physiology to what this experiment is seeing.

Perirhinal input to neocortical layer 1 controls learning. Science, 2020 DOI: 10.1126/science.aaz3136


Sci-Advent - Tiny quantum computer solves real optimization problem

This is a reblog of an article in ScienceDaily. See the original here.

Quantum computers have already managed to surpass ordinary computers in solving certain tasks -- unfortunately, totally useless ones. The next milestone is to get them to do useful things. Researchers at Chalmers University of Technology, Sweden, have now shown that they can solve a small part of a real logistics problem with their small, but well-functioning quantum computer.

Interest in building quantum computers has gained considerable momentum in recent years, and feverish work is underway in many parts of the world. In 2019, Google's research team made a major breakthrough when their quantum computer managed to solve a task far more quickly than the world's best supercomputer. The downside is that the solved task had no practical use whatsoever -- it was chosen because it was judged to be easy to solve for a quantum computer, yet very difficult for a conventional computer.

Therefore, an important task is now to find useful, relevant problems that are beyond the reach of ordinary computers, but which a relatively small quantum computer could solve.

"We want to be sure that the quantum computer we are developing can help solve relevant problems early on. Therefore, we work in close collaboration with industrial companies," says theoretical physicist Giulia Ferrini, one of the leaders of Chalmers University of Technology's quantum computer project, which began in 2018.

Together with Göran Johansson, Giulia Ferrini led the theoretical work when a team of researchers at Chalmers, including an industrial doctoral student from the aviation logistics company Jeppesen, recently showed that a quantum computer can solve an instance of a real problem in the aviation industry.

The algorithm proven on two qubits 

All airlines are faced with scheduling problems. For example, assigning individual aircraft to different routes represents an optimisation problem, one that grows very rapidly in size and complexity as the number of routes and aircraft increases.

Researchers hope that quantum computers will eventually be better at handling such problems than today's computers. The basic building block of the quantum computer -- the qubit -- is based on completely different principles than the building blocks of today's computers, allowing them to handle enormous amounts of information with relatively few qubits.

However, due to their different structure and function, quantum computers must be programmed in other ways than conventional computers. One proposed algorithm that is believed to be useful on early quantum computers is the so-called Quantum Approximate Optimization Algorithm (QAOA).

The Chalmers research team has now successfully executed said algorithm on their quantum computer -- a processor with two qubits -- and they showed that it can successfully solve the problem of assigning aircraft to routes. In this first demonstration, the result could be easily verified as the scale was very small -- it involved only two airplanes.
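A minimal version of such a demonstration can be sketched in a few lines of linear algebra. The toy encoding below is an assumption for illustration, not the Chalmers hardware or their tail-assignment instance: bit i records which route aircraft i takes, and the cost penalises both aircraft choosing the same route, so the valid assignments are the bitstrings 01 and 10.

```python
import numpy as np

# Depth-1 QAOA on two qubits, simulated with plain numpy (a sketch of
# the algorithm, not the Chalmers processor). Basis order: 00, 01, 10, 11.
cost = np.array([1.0, 0.0, 0.0, 1.0])  # penalise both aircraft on one route

def qaoa_state(gamma, beta):
    state = np.full(4, 0.5, dtype=complex)       # uniform |++> superposition
    state *= np.exp(-1j * gamma * cost)          # phase-separation layer
    c, s = np.cos(beta), np.sin(beta)
    rx = np.array([[c, -1j * s], [-1j * s, c]])  # single-qubit X rotation
    return np.kron(rx, rx) @ state               # mixer layer on both qubits

def success_probability(gamma, beta):
    probs = np.abs(qaoa_state(gamma, beta)) ** 2
    return probs[1] + probs[2]                   # weight on valid 01 and 10

angles = np.linspace(0, np.pi, 9)                # coarse grid search
best = max(success_probability(g, b)
           for g in angles for b in angles)
print(best)  # close to 1: depth-1 QAOA solves this toy instance
```

On this two-qubit instance a single QAOA layer already concentrates essentially all probability on the valid assignments; larger problems, like the 278-aircraft simulations mentioned below, require more qubits and deeper circuits.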

Potential to handle many aircraft 

With this feat, the researchers were first to show that the QAOA algorithm can solve the problem of assigning aircraft to routes in practice. They also managed to run the algorithm one level further than anyone before, an achievement that requires very good hardware and accurate control.

"We have shown that we have the ability to map relevant problems onto our quantum processor. We still have a small number of qubits, but they work well. Our plan has been to first make everything work very well on a small scale, before scaling up," says Jonas Bylander, senior researcher responsible for the experimental design, and one of the leaders of the project of building a quantum computer at Chalmers.

The theorists in the research team also simulated solving the same optimisation problem for up to 278 aircraft, which would require a quantum computer with 25 qubits.

"The results remained good as we scaled up. This suggests that the QAOA algorithm has the potential to solve this type of problem at even larger scales," says Giulia Ferrini.

Surpassing today's best computers would, however, require much larger devices. The researchers at Chalmers have now begun scaling up and are currently working with five quantum bits. The plan is to reach at least 20 qubits by 2021 while maintaining the high quality.

Applying the Quantum Approximate Optimization Algorithm to the Tail-Assignment Problem. Physical Review Applied, 2020; 14(3). DOI: 10.1103/PhysRevApplied.14.034009

Read me...

Sci-Advent - Promising Anti-Inflammatory to Treat Obesity and Alzheimer’s

Researchers at the Institute of Biotechnology (IBt) of the Universidad Nacional Autónoma de México (UNAM) have created a promising plant-based anti-inflammatory to treat obesity and Alzheimer's.

Using extracts from Malva parviflora, the researchers have shown that the drug is effective in mice at combating the inflammatory process that occurs in these chronic degenerative diseases.

Inflammation is a natural response of the body to different pathogens. It helps to create an adequate immune response and it can also help repair tissue damaged by trauma.

This process is essential for the body to return to homeostasis (a self-regulation phenomenon) once the pathogen has been eliminated or the tissue repaired. On the other hand, we now know that low-grade chronic inflammation is a common factor in many chronic degenerative diseases. Martín Gustavo Pedraza Alva, a researcher at the Institute of Biotechnology, reminds us that it is therefore important to understand this process at the molecular level: how it begins and how we might regulate it.

Together with Leonor Pérez Martínez, Pedraza Alva is part of the Neuroimmunobiology Consortium in the IBt Department of Molecular Medicine and Bioprocesses, where mice with obesity and Alzheimer's serve as disease models. The researcher pointed out that they work with the Malva parviflora plant, from which they prepare an extract that is tested in models of Alzheimer's and obesity.

Administering this hydroalcoholic extract delays the appearance of the hallmarks of the disease. The animals that receive it maintain their cognitive capacity, accumulate fewer senile plaques, and show reduced inflammation markers within the central nervous system, he said.

In mice that were given a high-fat diet, which normally develop insulin resistance and glucose intolerance, the Malva parviflora extract prevented glucose metabolism disorders and maintained their sensitivity to insulin and glucose tolerance.

A Malva parviflora’s fraction prevents the deleterious effects resulting from neuroinflammation. DOI: 10.1016/j.biopha.2019.109349 https://pubmed.ncbi.nlm.nih.gov/31545221/

Read me...

Sci-Advent - Artificial Intelligence, High Performance Computing and Gravitational Waves

In a recent paper posted on arXiv, researchers have highlighted the advantages that artificial intelligence techniques bring to research in fields such as astrophysics. They are making their models available, which is always great to see. They mention the use of these techniques to detect binary neutron stars and to forecast the merger of multi-messenger sources, such as binary neutron stars and neutron star-black hole systems. Here are some highlights from the paper:

Finding new ways to use artificial intelligence (AI) to accelerate the analysis of gravitational wave data, and ensuring the developed models are easily reusable promises to unlock new opportunities in multi-messenger astrophysics (MMA), and to enable wider use, rigorous validation, and sharing of developed models by the community. In this work, we demonstrate how connecting recently deployed DOE and NSF-sponsored cyberinfrastructure allows for new ways to publish models, and to subsequently deploy these models into applications using computing platforms ranging from laptops to high performance computing clusters. We develop a workflow that connects the Data and Learning Hub for Science (DLHub), a repository for publishing machine learning models, with the Hardware Accelerated Learning (HAL) deep learning computing cluster, using funcX as a universal distributed computing service. We then use this workflow to search for binary black hole gravitational wave signals in open source advanced LIGO data. We find that using this workflow, an ensemble of four openly available deep learning models can be run on HAL and process the entire month of August 2017 of advanced LIGO data in just seven minutes, identifying all four binary black hole mergers previously identified in this dataset, and reporting no misclassifications. This approach, which combines advances in AI, distributed computing, and scientific data infrastructure opens new pathways to conduct reproducible, accelerated, data-driven gravitational wave detection.

Research and development of AI models for gravitational wave astrophysics is evolving at a rapid pace. In less than four years, this area of research has evolved from disruptive prototypes into sophisticated AI algorithms that describe the same 4-D signal manifold as traditional gravitational wave detection pipelines for binary black hole mergers, namely, quasi-circular, spinning, non-precessing binary systems; have the same sensitivity as template matching algorithms; and are orders of magnitude faster, at a fraction of the computational cost.

AI models have been proven to effectively identify real gravitational wave signals in advanced LIGO data, including binary black hole and neutron star mergers. The current pace of progress makes it clear that the broader community will continue to advance the development of AI tools to realize the science goals of Multi-Messenger Astrophysics.
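As a rough illustration of the kind of signal search involved, the sketch below injects a hypothetical chirp into simulated white noise and scans for it by cross-correlation. A real pipeline uses trained deep learning models and actual detector strain data; the template, noise level, and sample rate here are all invented stand-ins.

```python
import numpy as np

rng = np.random.default_rng(42)
fs = 1024                                    # sample rate (toy value)
t = np.arange(0, 8, 1 / fs)
strain = 0.5 * rng.normal(size=t.size)       # simulated white detector noise

# Hypothetical upward-sweeping chirp standing in for a merger waveform.
tau = np.linspace(0, 0.5, fs // 2)
chirp = np.sin(2 * np.pi * (50 * tau + 60 * tau ** 2))
strain[4 * fs:4 * fs + chirp.size] += chirp  # inject a "merger" at t = 4 s

# Slide the template over the data and flag the best-matching time.
corr = np.correlate(strain, chirp, mode="valid")
peak_t = np.argmax(np.abs(corr)) / fs
print(f"candidate event near t = {peak_t:.2f} s")
```

A trained network replaces the fixed template with learned filters, which is what lets an ensemble scan a month of data in minutes when distributed across a cluster.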

Furthermore, mirroring the successful approach of corporations leading AI innovation in industry and technology, we are releasing our AI models to enable the broader community to use and perfect them. This approach is also helpful to address healthy and constructive skepticism from members of the community who do not feel at ease using AI algorithms.

Read me...

Sci-Advent - Artificial intelligence improves control of powerful plasma accelerators

This is a reblog of the post by Hayley Dunning in the Imperial College website. See the original here.

Researchers have used AI to control beams for the next generation of smaller, cheaper accelerators for research, medical and industrial applications.

Electrons are ejected from the plasma accelerator at almost the speed of light, before being passed through a magnetic field which separates the particles by their energy. They are then fired at a fluorescent screen, shown here

Experiments led by Imperial College London researchers, using the Science and Technology Facilities Council’s Central Laser Facility (CLF), showed that an algorithm was able to tune the complex parameters involved in controlling the next generation of plasma-based particle accelerators.

“The techniques we have developed will be instrumental in getting the most out of a new generation of advanced plasma accelerator facilities under construction within the UK and worldwide.”
Dr Rob Shalloo

The algorithm was able to optimize the accelerator much more quickly than a human operator, and could even outperform experiments on similar laser systems.

These accelerators focus the energy of the world’s most powerful lasers down to a spot the size of a skin cell, producing electrons and x-rays with equipment a fraction of the size of conventional accelerators.

The electrons and x-rays can be used for scientific research, such as probing the atomic structure of materials; in industrial applications, such as for producing consumer electronics and vulcanised rubber for car tyres; and could also be used in medical applications, such as cancer treatments and medical imaging.

Broadening accessibility

Several facilities using these new accelerators are in various stages of planning and construction around the world, including the CLF’s Extreme Photonics Applications Centre (EPAC) in the UK, and the new discovery could help them work at their best in the future. The results are published today in Nature Communications.

First author Dr Rob Shalloo, who completed the work at Imperial and is now at the accelerator centre DESY, said: “The techniques we have developed will be instrumental in getting the most out of a new generation of advanced plasma accelerator facilities under construction within the UK and worldwide.

“Plasma accelerator technology provides uniquely short bursts of electrons and x-rays, which are already finding uses in many areas of scientific study. With our developments, we hope to broaden accessibility to these compact accelerators, allowing scientists in other disciplines and those wishing to use these machines for applications, to benefit from the technology without being an expert in plasma accelerators.”

The outside of the vacuum chamber

First of its kind

The team worked with laser wakefield accelerators. These combine the world’s most powerful lasers with a source of plasma (ionised gas) to create concentrated beams of electrons and x-rays. Traditional accelerators need hundreds of metres to kilometres to accelerate electrons, but wakefield accelerators can manage the same acceleration within the space of millimetres, drastically reducing the size and cost of the equipment.

However, because wakefield accelerators operate in the extreme conditions created when lasers are combined with plasma, they can be difficult to control and optimise to get the best performance. In wakefield acceleration, an ultrashort laser pulse is driven into plasma, creating a wave that is used to accelerate electrons. Both the laser and plasma have several parameters that can be tweaked to control the interaction, such as the shape and intensity of the laser pulse, or the density and length of the plasma.

While a human operator can tweak these parameters, it is difficult to know how to optimise so many parameters at once. Instead, the team turned to artificial intelligence, creating a machine learning algorithm to optimise the performance of the accelerator.

The algorithm adjusted up to six parameters controlling the laser and plasma, fired the laser, analysed the data, and re-set the parameters, repeating this loop many times in succession until the optimal parameter configuration was reached.
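That optimise-measure loop can be sketched as Bayesian optimisation with a small Gaussian-process surrogate. Everything below is a toy: simulate_shot is a hypothetical stand-in for firing the laser and measuring beam quality, and only two of the up-to-six parameters are tuned.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_shot(x):
    # Hypothetical beam quality, peaked at x = (0.3, 0.7), with shot noise.
    return np.exp(-8 * np.sum((x - np.array([0.3, 0.7])) ** 2)) + 0.01 * rng.normal()

def rbf(A, B, ls=0.2):
    # Squared-exponential kernel between two sets of parameter settings.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

X = rng.random((5, 2))                        # initial random settings
y = np.array([simulate_shot(x) for x in X])   # their measured qualities

for _ in range(25):                           # the fire/analyse/re-set loop
    Kinv = np.linalg.inv(rbf(X, X) + 1e-4 * np.eye(len(X)))
    cand = rng.random((256, 2))               # random candidate settings
    Ks = rbf(cand, X)
    mu = Ks @ Kinv @ y                        # GP posterior mean
    var = 1.0 - np.einsum("ij,jk,ik->i", Ks, Kinv, Ks)
    ucb = mu + 2.0 * np.sqrt(np.clip(var, 0.0, None))  # acquisition function
    x_next = cand[np.argmax(ucb)]             # most promising next setting
    X = np.vstack([X, x_next])
    y = np.append(y, simulate_shot(x_next))   # "fire the laser" again

print("best measured quality:", round(float(y.max()), 3))
```

The acquisition function balances exploiting settings that already look good against exploring settings the surrogate is still uncertain about, which is why such loops can beat manual tuning.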

Lead researcher Dr Matthew Streeter, who completed the work at Imperial and is now at Queen’s University Belfast, said: “Our work resulted in an autonomous plasma accelerator, the first of its kind. As well as allowing us to efficiently optimise the accelerator, it also simplifies their operation and allows us to spend more of our efforts on exploring the fundamental physics behind these extreme machines.”

Future designs and further improvements

The team demonstrated their technique using the Gemini laser system at the CLF, and have already begun to use it in further experiments to probe the atomic structure of materials in extreme conditions and to study antimatter and quantum physics.

The data gathered during the optimisation process also provided new insight into the dynamics of the laser-plasma interaction inside the accelerator, potentially informing future designs to further improve accelerator performance.

The experiment was led by Imperial College London researchers with a team of collaborators from the Science and Technology Facilities Council (STFC), the York Plasma Institute, the University of Michigan, the University of Oxford and the Deutsches Elektronen-Synchrotron (DESY). It was funded by the UK’s STFC, the EU Horizon 2020 research and innovation programme, the US National Science Foundation and the UK’s Engineering and Physical Sciences Research Council.

‘Automation and control of laser wakefield accelerators using Bayesian optimisation’ by R.J. Shalloo et al. is published in Nature Communications.

Read me...

Sci-Advent - Machine Learning in Ear, Nose and Throat

This is a reblog of the article by Cian Hughes and Sumit Agrawal in ENTNews. See the original here.

Figure 1. (Left) CT scan of the right temporal bone. (Middle) Structures of the temporal bone automatically segmented using a TensorFlow based deep learning algorithm. (Right) Three-dimensional model of the critical structures of the temporal bone to be used for surgical planning and simulation. 
Images courtesy of the Auditory Biophysics Laboratory, Western University, London, Canada.

Machine learning in healthcare

Over the last five years there have been significant advances in high performance computing that have led to enormous scientific breakthroughs in the field of machine learning (a form of artificial intelligence), especially with regard to image processing and data analysis. 

These breakthroughs now affect multiple aspects of our lives, from the way our phone sorts and recognises photographs, to automated translation and transcription services, and have the potential to revolutionise the practice of medicine.

The most promising form of artificial intelligence used in medical applications today is deep learning, a type of machine learning in which deep neural networks are trained to identify patterns in data [1]. A common form of neural network used in image processing is the convolutional neural network (CNN). Initially developed for general-purpose visual recognition, CNNs have shown considerable promise in, for instance, the detection and classification of disease on medical imaging.
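At the heart of a CNN is the convolution operation: a small filter slid across an image. The sketch below hand-codes a single 3x3 vertical-edge filter in NumPy; a trained network learns many such filters from data instead of having them written by hand.

```python
import numpy as np

image = np.zeros((6, 6))
image[:, 3:] = 1.0                 # bright right half: one vertical edge

kernel = np.array([[1.0, 0.0, -1.0]] * 3)   # hand-coded vertical-edge filter

# Valid "convolution" (strictly cross-correlation, as in most deep learning libraries).
out = np.zeros((4, 4))
for i in range(4):
    for j in range(4):
        out[i, j] = np.sum(image[i:i + 3, j:j + 3] * kernel)

print(out)   # strong (negative) responses in the columns straddling the edge
```

Stacking many learned filters, nonlinearities, and pooling layers on top of this basic operation is what lets a CNN segment structures such as the facial nerve in a temporal bone CT.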

“Machine learning algorithms have also been central to the development of multiple assistive technologies that can help patients to overcome or alleviate disabilities”

Automated image segmentation has numerous clinical applications, ranging from quantitative measurement of tissue volume, through surgical planning/guidance, medical education and even cancer treatment planning. It is hoped that such advances in automated data analysis will help in the delivery of more timely care, and alleviate workforce shortages in areas such as breast cancer screening [2], where patient demand for screening already outstrips the availability of specialist breast radiologists in many parts of the world.

Applications in otolaryngology

Artificial intelligence is quickly making its way into [our] specialty. Both otolaryngologists and audiologists will soon be incorporating this technology into their clinical practices. Machine learning has been used to automatically classify auditory brainstem responses [8] and estimate audiometric thresholds [9]. This has allowed for accurate online testing [10], which could be used for rural and remote areas without access to standard audiometry (see the article by Dr Matthew Bromwich here).

Machine learning algorithms have also been central to the development of multiple assistive technologies that can help patients to overcome or alleviate disabilities. For example, in the context of hearing loss, significant advances in automated transcription apps, driven by machine learning algorithms, have proven particularly useful in recent months for patients who find themselves unable to lipread due to the use of face coverings to prevent the spread of COVID-19.

Figure 2. The virtual reality simulator CardinalSim (https://cardinalsim.stanford.edu/) depicting a left mastoidectomy and facial recess approach. The facial nerve (yellow) and round window (blue) were automatically delineated using deep learning techniques.
Image courtesy of the Auditory Biophysics Laboratory, Western University, London, Canada

In addition to their role in general image classification, CNNs are likely to play a significant role in the introduction of machine learning in healthcare, especially in image-heavy specialties such as otolaryngology. For otologists, deep learning algorithms can already identify detailed temporal bone structures from CT images [3-6], segment intracochlear anatomy [7], and identify individual cochlear implant electrodes [8] (Figure 1); automatic analysis of critical structures on temporal bone scans has already facilitated patient-specific virtual reality otologic surgery [9] (Figure 2). Deep learning will likely also be critical in customised cochlear implant programming in the future.

“Automatic analysis of critical structures on temporal bone scans has already facilitated patient-specific virtual reality otologic surgery”

Convolutional neural networks have also been used in rhinology to automatically delineate critical anatomy and quantify sinus opacification [10-12]. Deep learning networks have been used in head and neck oncology to automatically segment anatomic structures to accelerate radiotherapy planning [13-18]. For laryngologists, voice analysis software will likely incorporate machine learning classifiers to identify pathology as it has been shown to perform better than traditional rule-based algorithms [19].

Figure 3. Automated segmentation of organs at risk of damage from radiation during radiotherapy for head and neck cancer. Five axial slices from the scan of a 58-year-old male patient with a cancer of the right tonsil, selected from the Head-Neck Cetuximab trial dataset (patient 0522c0416) [20,21].
Adapted with permission from the original authors [13].


In summary, artificial intelligence and, in particular, deep learning algorithms will radically change the way we manage patients within our careers. Although developed in high-resource settings, the technology has equally significant applications in low-resource settings to facilitate quality care even in the presence of limited human resources.

“Although developed in high-resource settings, the technology has equally significant applications in low-resource settings to facilitate quality care even in the presence of limited human resources”


1. Bengio Y, Courville A, Vincent P. Representation learning: a review and new perspectives. IEEE Trans Pattern Anal Mach Intell 2013;35:1798-828. 
2. McKinney SM, Sieniek M, Shetty S. International evaluation of an AI system for breast cancer screening. Nature 2020;577:89-94. 
3. Heutink F, Kock V, Verbist B, et al. Multi-Scale deep learning framework for cochlea localization, segmentation and analysis on clinical ultra-high-resolution CT images. Comput Methods Programs Biomed 2020;191:105387. 
4. Fauser J, Stenin I, Bauer M, et al. Toward an automatic preoperative pipeline for image-guided temporal bone surgery. Int J Comput Assist Radiol Surg 2019;14(6):967-76. 
5. Zhang D, Wang J, Noble JH, et al. Deep convolutional neural networks for accurate classification and multi-landmark localization of head CTs. Med Image Anal 2020;61:101659.
6. Nikan S, van Osch K, Bartling M, et al. PWD-3DNet: A deep learning-based fully-automated segmentation of multiple structures on temporal bone CT scans. Submitted to IEEE Trans Image Process.
7. Wang J, Noble JH, Dawant BM. Metal Artifact Reduction and Intra Cochlear Anatomy Segmentation in CT Images of the Ear With a Multi-Resolution Multi-Task 3D Network. IEEE 17th International Symposium on Biomedical Imaging (ISBI) 2020;596-9. 
8. Chi Y, Wang J, Zhao Y, et al. A Deep-Learning-Based Method for the Localization of Cochlear Implant Electrodes in CT Images. IEEE 16th International Symposium on Biomedical Imaging (ISBI) 2019;1141-5. 
9. Compton EC, et al. Assessment of a virtual reality temporal bone surgical simulator: a national face and content validity study. J Otolaryngol Head Neck Surg 2020;49:17. 
10. Laura CO, Hofmann P, Drechsler K, Wesarg S. Automatic Detection of the Nasal Cavities and Paranasal Sinuses Using Deep Neural Networks. IEEE 16th International Symposium on Biomedical Imaging (ISBI) 2019;1154-7. 
11. Iwamoto Y, Xiong K, Kitamura T, et al. Automatic Segmentation of the Paranasal Sinus from Computer Tomography Images Using a Probabilistic Atlas and a Fully Convolutional Network. Conf Proc IEEE Eng Med Biol Soc 2019;2789-92. 
12. Humphries SM, Centeno JP, Notary AM, et al. Volumetric assessment of paranasal sinus opacification on computed tomography can be automated using a convolutional neural network. Int Forum Allergy Rhinol 2020. 
13. Nikolov S, Blackwell S, Mendes R, et al. Deep learning to achieve clinically applicable segmentation of head and neck anatomy for radiotherapy. arXiv [cs.CV] 2018. 
14. Tong N, Gou S, Yang, S, et al. Fully automatic multi-organ segmentation for head and neck cancer radiotherapy using shape representation model constrained fully convolutional neural networks. Med Phys 2018;45;4558-67. 
15. Ibragimov B, Xing L. Segmentation of organs-at-risks in head and neck CT images using convolutional neural networks. Med Phys 2017;44:547-57. 
16. Vrtovec T, Močnik D, Strojan P, et al. B. Auto-segmentation of organs at risk for head and neck radiotherapy planning: from atlas-based to deep learning methods. Med Phys 2020.
17. Zhu W, Huang Y, Zeng L. et al. AnatomyNet: Deep learning for fast and fully automated whole-volume segmentation of head and neck anatomy. Med Phys 2019;46(2):576-89. 
18. Tong N, Gou S, Yang S, et al. Shape constrained fully convolutional DenseNet with adversarial training for multiorgan segmentation on head and neck CT and low-field MR images. Med Phys 2019;46:2669-82. 
19. Cesari U, De Pietro G, Marciano E, et al. Voice Disorder Detection via an m-Health System: Design and Results of a Clinical Study to Evaluate Vox4Health. Biomed Res Int 2018;8193694. 
20. Bosch WR, Straube WL, Matthews JW, Purdy JA. Data From Head-Neck_Cetuximab 2015. 
21. Clark K, Vendt B, Smith K, et al. The Cancer Imaging Archive (TCIA): maintaining and operating a public information repository. J Digit Imaging 2013;26:1045-57.

Read me...

Sci-Advent - Artificial intelligence helps scientists develop new general models in ecology

The automation of scientific discovery is here to stay. Among other results, a machine-human collaboration has found a hitherto unknown general model explaining the relation between the area and age of an island and the number of species it hosts.

In ecology, millions of species interact in billions of different ways with one another and with their environment. Ecosystems often seem chaotic, or at least overwhelming, for anyone trying to understand them and make predictions for the future.

Artificial intelligence and machine learning are able to detect patterns and predict outcomes in ways that often resemble human reasoning. They pave the way to increasingly powerful cooperation between humans and computers.

Within AI, evolutionary computation methods replicate in some sense the processes of evolution of species in the natural world. A particular method called symbolic regression allows the evolution of human-interpretable formulas that explain natural laws.
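A toy version of symbolic regression can be put together in a few lines: search over small expression trees and keep the one that best fits the data. Real systems, like the one used in the study, apply evolutionary operators such as mutation and crossover; plain random search over a tiny set of building blocks keeps this sketch short.

```python
import random
import numpy as np

random.seed(1)
x = np.linspace(0.1, 3, 40)
target = 2 * x + np.sin(x)          # hidden "natural law" to rediscover

OPS = [("+", lambda a, b: a + b), ("*", lambda a, b: a * b)]
LEAVES = [("x", lambda: x), ("1", lambda: np.ones_like(x)),
          ("2", lambda: 2 * np.ones_like(x)), ("sin(x)", lambda: np.sin(x))]

def random_expr(depth=2):
    """Build a random expression tree, returning (formula string, values)."""
    if depth == 0 or random.random() < 0.3:
        name, leaf = random.choice(LEAVES)
        return name, leaf()
    op, f = random.choice(OPS)
    ln, lv = random_expr(depth - 1)
    rn, rv = random_expr(depth - 1)
    return f"({ln} {op} {rn})", f(lv, rv)

best_name, best_err = None, np.inf
for _ in range(20000):
    name, values = random_expr()
    err = float(np.mean((values - target) ** 2))
    if err < best_err:
        best_name, best_err = name, err

print(best_name, "mse:", best_err)   # a formula equivalent to 2x + sin(x)
```

The point is that the output is a human-readable formula, not an opaque model, which is exactly what makes the approach attractive for deriving general rules in ecology.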

"We used symbolic regression to demonstrate that computers are able to derive formulas that represent the way ecosystems or species behave in space and time. These formulas are also easy to understand. They pave the way for general rules in ecology, something that most methods in AI cannot do," says Pedro Cardoso, curator at the Finnish Museum of Natural History, University of Helsinki.

With the help of the symbolic regression method, an interdisciplinary team from Finland, Portugal, and France was able to explain why some species exist in some regions and not in others, and why some regions have more species than others.

The researchers were able, for example, to find a new general model that explains why some islands have more species than others. Oceanic islands have a natural life cycle, emerging from volcanoes and eventually submerging through erosion after millions of years. With no human input, the algorithm found that the number of species on an island increases with island age and peaks at intermediate ages, when erosion is still low.
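This hump-shaped species-age relationship can be written down in the style of the general dynamic model of island biogeography, in which log-richness is linear in log-area and quadratic in island age. The coefficients below are illustrative, not the ones found in the study:

```python
import numpy as np

def richness(area, age, b=(1.0, 0.3, 0.8, -0.1)):
    """ln S = b0 + b1*ln(area) + b2*age + b3*age**2 (illustrative coefficients)."""
    b0, b1, b2, b3 = b
    return np.exp(b0 + b1 * np.log(area) + b2 * age + b3 * age ** 2)

ages = np.linspace(0.1, 8.0, 50)             # island age, arbitrary units
S = richness(area=100.0, age=ages)
peak_age = ages[np.argmax(S)]
print("richness peaks at intermediate age:", round(float(peak_age), 2))
```

The negative quadratic term is what produces the peak at intermediate ages; symbolic regression searches for alternative functional forms that may fit the data better in particular circumstances.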

"The explanation was known, a couple of formulas already existed, but we were able to find new ones that outperform the existing ones under certain circumstances," says Vasco Branco, PhD student working on the automation of extinction risk assessments at the University of Helsinki.

The research positions explainable artificial intelligence as a field to explore, promoting forms of human-machine cooperation that are only now starting to scratch the surface of what is possible.

"Evolving free-form equations purely from data, often without prior human inference or hypotheses, may represent a very powerful tool in the arsenal of a discipline as complex as ecology," says Luis Correia, computer science professor at the University of Lisbon.

Automated Discovery of Relationships, Models, and Principles in Ecology. Frontiers in Ecology and Evolution, 2020; 8. DOI: 10.3389/fevo.2020.530135

Read me...

Sci-Advent - Significant step toward quantum advantage

Optimised quantum algorithms present solution to Fermi-Hubbard model on near-term hardware

This is a reblog of an article in Science Daily. See the original here.

The team, led by Bristol researcher and Phasecraft co-founder, Dr. Ashley Montanaro, has discovered algorithms and analysis which significantly lessen the quantum hardware capability needed to solve problems which go beyond the realm of classical computing, even supercomputers.

In the paper, published in Physical Review B, the team demonstrates how optimised quantum algorithms can solve the notorious Fermi-Hubbard model on near-term hardware.

The Fermi-Hubbard model is of fundamental importance in condensed-matter physics as a model for strongly correlated materials and a route to understanding high-temperature superconductivity.

Finding the ground state of the Fermi-Hubbard model has been predicted to be one of the first applications of near-term quantum computers, and one that offers a pathway to understanding and developing novel materials.
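The smallest instance of the model, the two-site Fermi-Hubbard Hamiltonian at half filling, can still be diagonalised exactly on a classical computer, which shows why quantum hardware only pays off at larger lattices, where the Hilbert space grows exponentially. A NumPy sketch (with fermionic sign conventions simplified, which leaves the spectrum unchanged):

```python
import numpy as np

t_hop, U = 1.0, 4.0   # hopping amplitude and on-site repulsion (example values)

# Sz = 0 basis: |up down, 0>, |0, up down>, |up, down>, |down, up>
H = np.array([[U,      0.0,    -t_hop, -t_hop],
              [0.0,    U,      -t_hop, -t_hop],
              [-t_hop, -t_hop,  0.0,    0.0],
              [-t_hop, -t_hop,  0.0,    0.0]])

E = np.linalg.eigvalsh(H)
exact = (U - np.sqrt(U ** 2 + 16 * t_hop ** 2)) / 2   # known closed-form ground energy
print("ground energy:", round(E[0], 4), "closed form:", round(exact, 4))
```

Each added lattice site multiplies the dimension of the state space by four, which is what pushes even modest instances beyond brute-force diagonalisation and motivates the low-depth quantum circuits studied in the paper.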

Dr. Ashley Montanaro, research lead and cofounder of Phasecraft: "Quantum computing has critically important applications in materials science and other domains. Despite the major quantum hardware advances recently, we may still be several years from having the right software and hardware to solve meaningful problems with quantum computing. Our research focuses on algorithms and software optimisations to maximise the quantum hardware's capacity, and bring quantum computing closer to reality.

"Near-term quantum hardware will have limited device and computation size. Phasecraft applied new theoretical ideas and numerical experiments to put together a very comprehensive study on different strategies for solving the Fermi-Hubbard model, zeroing in on strategies that are most likely to have the best results and impact in the near future.

"The results suggest that optimising over quantum circuits with a gate depth substantially less than a thousand could be sufficient to solve instances of the Fermi-Hubbard model beyond the capacity of a supercomputer. This new research shows significant promise for the capabilities of near-term quantum devices, improving on previous research findings by around a factor of 10."

Physical Review B, published by the American Physical Society, is the top specialist journal in condensed-matter physics. The peer-reviewed research paper was also chosen as the Editors' Suggestion and to appear in Physics magazine.

Andrew Childs, Professor in the Department of Computer Science and Institute for Advanced Computer Studies at the University of Maryland: "The Fermi-Hubbard model is a major challenge in condensed-matter physics, and the Phasecraft team has made impressive steps in showing how quantum computers could solve it. Their work suggests that surprisingly low-depth circuits could provide useful information about this model, making it more accessible to realistic quantum hardware."

Hartmut Neven, Head of Quantum Artificial Intelligence Lab, Google: "Sooner or later, quantum computing is coming. Developing the algorithms and technology to power the first commercial applications of early quantum computing hardware is the toughest challenge facing the field, which few are willing to take on. We are proud to be partners with Phasecraft, a team that are developing advances in quantum software that could shorten that timeframe by years."

Phasecraft Founder Dr. Toby Cubitt: "At Phasecraft, our team of leading quantum theorists have been researching and applying quantum theory for decades, leading some of the top global academic teams and research in the field. Today, Ashley and his team have demonstrated ways to get closer to achieving new possibilities that exist just beyond today's technological bounds."

Phasecraft has closed a record seed round for a quantum company in the UK with £3.7m in funding from private-sector VC investors, led by LocalGlobe with Episode1 along with previous investors. Former Songkick founder Ian Hogarth has also joined as board chair for Phasecraft. Phasecraft previously raised a £750,000 pre-seed round led by UCL Technology Fund with Parkwalk Advisors and London Co-investment Fund and has earned several grants facilitated by InnovateUK. Between equity funding and research grants, Phasecraft has raised more than £5.5m.

Dr Toby Cubitt: "With new funding and support, we are able to continue our pioneering research and industry collaborations to develop the quantum computing industry and find useful applications faster."

Read me...

Sci-Advent - Aztec skull tower: Archaeologists unearth new sections in Mexico City

This is a reblog of an article in the BBC. See the original here.

Archaeologists have excavated more sections of an extraordinary Aztec tower of human skulls under the centre of Mexico City.

Mexico's National Institute of Anthropology and History (INAH) said a further 119 skulls had been uncovered. The tower was discovered in 2015 during the restoration of a building in the Mexican capital.

It is believed to be part of a skull rack from the temple to the Aztec god of the sun, war and human sacrifice. Known as the Huey Tzompantli, the skull rack stood on the corner of the chapel of Huitzilopochtli, the patron of the Aztec capital Tenochtitlan.

The Aztecs were a group of Nahuatl-speaking peoples that dominated large parts of central Mexico from the 14th to the 16th centuries. Their empire was overthrown by invaders led by the Spanish conquistador Hernán Cortés, who captured Tenochtitlan in 1521.

A similar structure to the Huey Tzompantli struck fear in the soldiers accompanying the Spanish conqueror when they invaded the city. The cylindrical structure is near the huge Metropolitan Cathedral built over the Templo Mayor, one of the main temples of Tenochtitlan, now modern day Mexico City.

"The Templo Mayor continues to surprise us, and the Huey Tzompantli is without doubt one of the most impressive archaeological finds of recent years in our country," Mexican Culture Minister Alejandra Frausto said.

Archaeologists have identified three construction phases of the tower, which dates back to between 1486 and 1502. The tower's original discovery surprised anthropologists, who had been expecting to find the skulls of young male warriors; instead, they also unearthed the crania of women and children, raising questions about human sacrifice in the Aztec Empire.

"Although we can't say how many of these individuals were warriors, perhaps some were captives destined for sacrificial ceremonies," said archaeologist Raul Barrera.

"We do know that they were all made sacred," he added. "Turned into gifts for the gods or even personifications of deities themselves."

Read me...

Sci-Advent - Getting the right grip: Designing soft and sensitive robotic fingers

person holding black and silver hand tool
Photo by C Technical on Pexels.com

To develop a more human-like robotic gripper, it is necessary to provide sensing capabilities to the fingers. However, conventional sensors compromise the mechanical properties of soft robots. Now, scientists have designed a 3D printable soft robotic finger containing a built-in sensor with adjustable stiffness. Their work represents a big step toward safer and more dexterous robotic handling, which will extend the applications of robots to fields such as health and elderly care.

Although robotics has reshaped and even redefined many industrial sectors, there still exists a gap between machines and humans in fields such as health and elderly care. For robots to safely manipulate or interact with fragile objects and living organisms, new strategies to enhance their perception while making their parts softer are needed. In fact, building a safe and dexterous robotic gripper with human-like capabilities is currently one of the most important goals in robotics.

One of the main challenges in the design of soft robotic grippers is integrating traditional sensors onto the robot's fingers. Ideally, a soft gripper should have what's known as proprioception -- a sense of its own movements and position -- to be able to safely execute varied tasks. However, traditional sensors are rigid and compromise the mechanical characteristics of the soft parts. Moreover, existing soft grippers are usually designed with a single type of proprioceptive sensation: either pressure or finger curvature.

To overcome these limitations, scientists at Ritsumeikan University, Japan, have been working on novel soft gripper designs under the lead of Associate Professor Mengying Xie. In their latest study published in Nano Energy, they successfully used multimaterial 3D printing technology to fabricate soft robotic fingers with a built-in proprioception sensor. Their design strategy offers numerous advantages and represents a large step toward safer and more capable soft robots.

The soft finger has a reinforced inflation chamber that makes it bend in a highly controllable way according to the input air pressure. In addition, the stiffness of the finger is also tunable by creating a vacuum in a separate chamber. This was achieved through a mechanism called vacuum jamming, by which multiple stacked layers of a bendable material can be made rigid by sucking out the air between them. Both functions combined enable a three-finger robotic gripper to properly grasp and maintain hold of any object by ensuring the necessary force is applied.

Most notably, however, a single piezoelectric layer was included among the vacuum jamming layers as a sensor. The piezoelectric effect produces a voltage difference when the material is under pressure. The scientists leveraged this phenomenon as a sensing mechanism for the robotic finger, providing a simple way to sense both its curvature and initial stiffness (prior to vacuum adjustment). They further enhanced the finger's sensitivity by including a microstructured layer among the jamming layers to improve the distribution of pressure on the piezoelectric material.
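
The paper's calibration procedure is not described here, but the general idea of mapping the sensor's voltage readout to a curvature estimate can be sketched with a simple least-squares line fit. All readings below are invented for illustration; they are not data from the study.

```python
def fit_line(voltages, curvatures):
    """Ordinary least-squares fit curvature = slope * voltage + intercept,
    a simple linear calibration model for a piezoelectric readout."""
    n = len(voltages)
    mean_v = sum(voltages) / n
    mean_c = sum(curvatures) / n
    slope = sum((v - mean_v) * (c - mean_c) for v, c in zip(voltages, curvatures)) / sum(
        (v - mean_v) ** 2 for v in voltages
    )
    intercept = mean_c - slope * mean_v
    return slope, intercept

# Invented calibration readings: sensor voltage (V) vs finger curvature (1/m).
voltages = [0.0, 0.5, 1.0, 1.5, 2.0]
curvatures = [0.0, 4.9, 10.1, 15.0, 20.2]
slope, intercept = fit_line(voltages, curvatures)

# An unseen voltage reading can now be mapped to an estimated curvature.
estimated = slope * 0.8 + intercept
```

In practice such a calibration would be repeated for each stiffness setting, since jamming the layers changes how the finger bends for a given pressure.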

The use of multimaterial 3D printing, a simple and fast prototyping process, allowed the researchers to easily integrate the sensing and stiffness-tuning mechanisms into the design of the robotic finger itself. "Our work suggests a way of designing sensors that contribute not only as sensing elements for robotic applications, but also as active functional materials to provide better control of the whole system without compromising its dynamic behavior," says Prof Xie. Another remarkable feature of their design is that the sensor is self-powered by the piezoelectric effect, meaning that it requires no energy supply -- essential for low-power applications.

Overall, this exciting new study will help future researchers find new ways of improving how soft grippers interact with and sense the objects being manipulated. In turn, this will greatly expand the uses of robots, as Prof Xie indicates: "Self-powered built-in sensors will not only allow robots to safely interact with humans and their environment, but also eliminate the barriers to robotic applications that currently rely on powered sensors to monitor conditions."

Let's hope this technology is further developed so that our mechanical friends can soon join us in many more human activities!

Flexible self-powered multifunctional sensor for stiffness-tunable soft robotic gripper by multimaterial 3D printing. Nano Energy, 2021; 79: 105438. DOI: 10.1016/j.nanoen.2020.105438

Read me...

Sci-Advent - 'Electronic amoeba' finds approximate solution to traveling salesman problem in linear time

Inspired by the efficient foraging behavior of a single-celled amoeba, researchers at Hokkaido University and Amoeba Energy in Japan have developed an analog computer that finds a reliable and swift approximate solution to the traveling salesman problem -- a representative combinatorial optimization problem.

Amoeba-inspired analog electronic computing system integrating resistance crossbar for solving the travelling salesman problem. Scientific Reports, 2020; 10 (1) DOI: 10.1038/s41598-020-77617-7

Many real-world tasks, such as planning and scheduling in logistics and automation, are mathematically formulated as combinatorial optimization problems. Conventional digital computers, including supercomputers, cannot solve these complex problems in a practically permissible time, because the number of candidate solutions they need to evaluate increases exponentially with the problem size -- a phenomenon known as combinatorial explosion. Thus, new computers called "Ising machines," including "quantum annealers," have been actively developed in recent years. These machines, however, require complicated pre-processing to convert each task into a form they can handle, and they risk presenting illegal solutions that violate some constraints and requests -- major obstacles to practical applications.

These obstacles can be avoided using the newly developed "electronic amoeba," an analog computer inspired by a single-celled amoeboid organism. The amoeba is known to maximize nutrient acquisition efficiently by deforming its body. It has been shown to find an approximate solution to the traveling salesman problem (TSP): given a map of a certain number of cities, the problem is to find the shortest route that visits each city exactly once and returns to the starting city. This finding inspired Professor Seiya Kasai at Hokkaido University to mimic the dynamics of the amoeba electronically using an analog circuit, as described in the journal Scientific Reports. "The amoeba core searches for a solution under the electronic environment where resistance values at intersections of crossbars represent constraints and requests of the TSP," says Kasai. Using the crossbars, the city layout can be easily altered by updating the resistance values without complicated pre-processing.

Kenta Saito, a PhD student in Kasai's lab, fabricated the circuit on a breadboard and succeeded in finding the shortest route for the 4-city TSP. He then evaluated the performance on larger problems using a circuit simulator. The circuit reliably found a high-quality legal solution with a significantly shorter route length than the average obtained by random sampling. Moreover, the time required to find a high-quality legal solution grew only linearly with the number of cities. Comparing the search time with that of a representative TSP algorithm, "2-opt," the electronic amoeba becomes more advantageous as the number of cities increases. "The analog circuit reproduces well the unique and efficient optimization capability of the amoeba, which the organism has acquired through natural selection," says Kasai.
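
For context, the classical 2-opt heuristic used as the baseline above repeatedly reverses segments of the tour whenever doing so shortens the route. A minimal sketch in Python (the random city layout is my own toy example, not data from the study):

```python
import math
import random

def tour_length(cities, tour):
    """Total length of a closed tour over the given city coordinates."""
    return sum(
        math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
        for i in range(len(tour))
    )

def two_opt(cities, tour):
    """Repeatedly reverse tour segments while any reversal shortens the route."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 2, len(tour) + 1):
                candidate = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(cities, candidate) < tour_length(cities, tour):
                    tour = candidate
                    improved = True
    return tour

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(10)]
initial = list(range(10))
best = two_opt(cities, initial)
improved_length = tour_length(cities, best)
```

Like the electronic amoeba, 2-opt returns an approximate (locally optimal) tour rather than a guaranteed shortest one, but its search time grows much faster with the number of cities.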

"As the analog computer consists of a simple and compact circuit, it can tackle many real-world problems in which inputs, constraints, and requests dynamically change and can be embedded into IoT devices as a power-saving microchip," says Masashi Aono who leads Amoeba Energy to promote the practical use of the amoeba-inspired computers.

This is a Joint Release between Hokkaido University and Amoeba Energy Co., Ltd. More information

Read me...

Sci-Advent - New superhighway system discovered in the Solar System

Researchers have discovered a new superhighway network to travel through the Solar System much faster than was previously possible. Such routes can drive comets and asteroids near Jupiter to Neptune's distance in under a decade and to 100 astronomical units in less than a century. They could be used to send spacecraft to the far reaches of our planetary system relatively fast, and to monitor and understand near-Earth objects that might collide with our planet.

The arches of chaos in the Solar System. Science Advances, 2020; 6 (48): eabd1313 DOI: 10.1126/sciadv.abd1313

In their paper, published in the Nov. 25 issue of Science Advances, the researchers observed the dynamical structure of these routes, which form a connected series of arches inside what are known as space manifolds, extending from the asteroid belt to Uranus and beyond. This newly discovered "celestial autobahn" or "celestial highway" acts over several decades, as opposed to the hundreds of thousands or millions of years that usually characterize Solar System dynamics.

The most conspicuous arch structures are linked to Jupiter and the strong gravitational forces it exerts. The population of Jupiter-family comets (comets with orbital periods of less than 20 years), as well as the small solar system bodies known as Centaurs, is controlled by such manifolds on unprecedented time scales. Some of these bodies will end up colliding with Jupiter or being ejected from the Solar System.

The structures were resolved by gathering numerical data on millions of orbits in our Solar System and computing how these orbits fit within already-known space manifolds. The results need further study, both to determine how the routes could be used by spacecraft and to understand how such manifolds behave in the vicinity of the Earth, where they control asteroid and meteorite encounters as well as the growing population of artificial, human-made objects in the Earth-Moon system.


Read me...

Sci-Advent - Trends in prevalence of blindness and distance and near vision impairment over 30 years

Keeping up with the Sci-advent post from yesterday about vision and optics, this report from the University of Michigan is relevant news. Researchers say eye care accessibility around the globe isn't keeping up with an aging population, posing challenges for eye care professionals over the next 30 years.

As the global population grows and ages, so does their need for eye care. But according to two new studies published in The Lancet Global Health, these needs aren't being met relative to international targets to reduce avoidable vision loss.

As 2020 comes to a close, an international group of researchers set out to provide updated estimates of the number of people who are blind or visually impaired across the globe, to identify the predominant causes, and to illustrate epidemiological trends over the last 30 years.

"This is important because when we think about setting a public health agenda, knowing the prevalence of an impairment, what causes it, and where in the world it's most common informs the actions that key decision makers like the WHO and ministries of health take to allocate limited resources," says Joshua Ehrlich, M.D., M.P.H., a study author and ophthalmologist at Kellogg Eye Center.

The study team assesses a collection of secondary data every five years, undertaking a meta-analysis of population-based surveys of eye disease assembled by the Vision Loss Expert Group and spanning from 1980 to 2018.

Creating a blueprint

A study like this poses challenges since regional populations vary in age.

"For example, the population in some Asian and European countries is much older on average than the population in many African nations. Many populations are also growing older over time. A direct comparison of the percentage of the population with blindness or vision impairment wouldn't paint a complete picture," explains Ehrlich, who is also a member of the University of Michigan's Institute for Healthcare Policy and Innovation.

To address this issue, the study looked at age-standardized prevalence, accomplished by adjusting regional populations to fit a standard age structure.
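
Direct age standardization, the adjustment described above, weights each region's age-specific prevalence by a shared standard age structure. The toy numbers below are invented to show why crude prevalence alone can mislead: two regions with identical age-specific rates report different crude prevalence simply because one is older.

```python
def weighted_prevalence(age_specific_rates, age_weights):
    """Population prevalence as a weighted sum of age-group rates.
    With a region's own age weights this gives crude prevalence;
    with a common standard age structure it gives the
    age-standardized prevalence."""
    return sum(rate * weight for rate, weight in zip(age_specific_rates, age_weights))

# Invented age-specific blindness prevalence for three broad bands
# (ages 0-39, 40-69, 70+): identical in both regions.
rates = [0.001, 0.004, 0.020]

# The regions differ only in age structure (fractions summing to 1).
young_region_weights = [0.60, 0.30, 0.10]
old_region_weights = [0.30, 0.40, 0.30]
standard_weights = [0.50, 0.35, 0.15]

crude_young = weighted_prevalence(rates, young_region_weights)   # 0.0038
crude_old = weighted_prevalence(rates, old_region_weights)       # 0.0079
standardized = weighted_prevalence(rates, standard_weights)      # 0.0049, same for both
```

After standardization both regions score identically, so any remaining difference between real regions reflects eye care and disease burden rather than demographics.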

"We found that the age-standardized prevalence is decreasing around the world, which tells us eye care systems and quality of care are getting better," says study author Monte A. Del Monte, M.D., a pediatric ophthalmologist at Kellogg Eye Center. "However, as populations age, a larger number of people are being affected by serious vision impairment, suggesting we need to improve accessibility to care and further develop human resources to provide care."

In fact, the researchers found no significant reduction in the number of people with treatable vision loss over the last ten years, falling far short of the World Health Assembly Global Action Plan target of a 25% global reduction in avoidable vision loss over the same period.

Although findings varied by region globally, cataracts and the unmet need for glasses were the most prevalent causes of moderate to severe vision impairment. Approximately 45% of the 33.6 million cases of global blindness were caused by cataracts, which can be treated with surgery.

Refractive error, in which an abnormal shape of the cornea or lens prevents light from being bent correctly and produces a blurred image, accounted for vision loss in 86 million people across the globe. This largest contributor to moderate or severe vision impairment can be easily treated with glasses.

Also important, vision impairment due to diabetic retinopathy, a complication of diabetes that affects eyesight, was found to have increased in global prevalence.

"This is another condition in which we can prevent vision loss with early screenings and intervention," says study author Alan L. Robin, M.D., a collaborating ophthalmologist at Kellogg Eye Center and professor at Johns Hopkins Medicine. "As diabetes becomes more common across the globe, this condition may begin to affect younger populations, as well."

Looking to 2050

"Working as a global eye care community, we need to now look at the next 30 years," Ehrlich says. "We hope to take these findings and create implementable strategies with our global partners through our Kellogg Eye Center for International Ophthalmology so fewer people go blind unnecessarily."

In an effort to contribute to the WHO initiative VISION 2020: The Right to Sight, the researchers updated estimates of the global burden of vision loss and provided predictions for what the year 2050 may look like.

They found that the majority of the 43.9 million people blind globally are women. Women also make up the majority of the 295 million people who have moderate to severe vision loss, the 163 million who have mild vision loss and the 510 million who have visual impairments related to the unmet need for glasses, specifically poor near vision.

By 2050, Ehrlich, Del Monte, and Robin predict 61 million people will be blind, 474 million will have moderate to severe vision loss, 360 million will have mild vision loss and 866 million will have visual impairments related to farsightedness.

"Eliminating preventable blindness globally isn't keeping pace with the global population's needs," Ehrlich says. "We face enormous challenges in treating and preventing vision impairment as the global population grows and ages, but I'm optimistic of a future where we will succeed because of the measures we take now to make a difference."

Both studies were funded by Brien Holden Vision Institute, Fondation Théa, Fred Hollows Foundation, Bill & Melinda Gates Foundation, Lions Clubs International Foundation, Sightsavers International and the University of Heidelberg.

GBD 2019 Blindness and Vision Impairment Collaborators, on behalf of the Vision Loss Expert Group of the Global Burden of Disease Study. Causes of blindness and vision impairment in 2020 and trends over 30 years, and prevalence of avoidable blindness in relation to VISION 2020: the Right to Sight: an analysis for the Global Burden of Disease Study. The Lancet Global Health, 2020; DOI: 10.1016/S2214-109X(20)30489-7

Read me...

Sci-Advent - Mathematics and physics help protect the sight of patients with diabetes: Sabino Chávez-Cerda

This is a translation of the article by Antimio Cruz in Crónica. You can read the original in Spanish here. As a former student of Prof. Chávez-Cerda, I am very pleased to see that his research continues getting traction and the recognition it deserves.

The Theory of Exotic Beams was created by Sabino Chávez-Cerda, but it was rejected for over four years before gaining acceptance.

The Mexican scientist Sabino Chávez-Cerda has received numerous international awards over thirty years for his contributions to understanding one of the most complex phenomena in nature: light. This year, together with one of his graduates and other collaborators from Mexico and England, he presented a model that reproduces with great precision the operation of the flexible lens located behind the iris of the human eye: the crystalline lens. This work was recognised as one of the most important investigations in optics of the year 2020 by the magazine Optics and Photonics News.

Now, in conversation for the readers of Crónica, the researcher from the Instituto Nacional de Astrofísica, Óptica y Electrónica (INAOE) in Mexico says that we should all be aware that science is much closer to our daily lives than we imagine. For example, his studies can help provide better care for the eyes of patients with diabetes.

“My studies on how light travels have enabled me and my collaborators to make important contributions through the use of physics and mathematics, the same that everyone is able to learn. I have made new interpretations that at first were rejected and, as more evidence became available, ended up being accepted,” says the man who created the Theory of Exotic Beams, for which he was elected a Fellow in 2013, one of the most important accolades in the field, by the Optical Society of America (OSA), one of the most prestigious scientific organisations in the world.

“My recent work with the lenses of human eyes began in an interesting way. A few years ago some ophthalmic surgeons from Puebla, Mexico, invited us to organise a seminar on how light propagates. They had the equipment to perform laser surgeries, but they had doubts about the subject of aberrations. I then realised that what we were investigating about beams could be of great use in healthcare,” says Chávez-Cerda, who since childhood has lived in many cities in Mexico and abroad. He mentions that one of the things he most enjoys is watching the sunset on the shores of the Mexican Pacific.

“I was born in Celaya, Guanajuato, Mexico. My father was an agronomist and we had to move many times. So during my childhood and youth I lived in Nayarit, Veracruz, Guanajuato and Mexico City”, says the physicist and PhD who has also carried out research in England, China, Brazil and the United States.

COMPLEX QUESTIONS - Light is the part of electromagnetic radiation that can be perceived by the human eye, and it may have a complex behaviour. It is made up of photons, which have the duality of being both a wave and a massless particle. The field of science that studies light, optics, has become so diverse that today it can be compared to a tree with many branches, including the study of fibre optics, the use of laser light, non-linear optics and many more.

“For example, physical optics tries to understand how light travels and how it changes when an obstruction or lens is put in its path. We have all seen a CD generate a rainbow when placed in front of a light source. This is due to the physical phenomenon called diffraction, just like holograms. What happens there is that the light is ‘spread’, and that is one of the many phenomena that we study,” details the INAOE researcher, whose individual and team work averages around 4,000 citations in four of the main databases of scientific articles: Web of Science (WoS), Scopus, ResearchGate and Google Scholar.

His long academic career is built on his bachelor's degree from the Escuela Superior de Física y Matemáticas (ESFM) of the Instituto Politécnico Nacional (IPN) in Mexico. He later obtained an MSc at the Centro de Investigaciones en Óptica (CIO) in León, Guanajuato, Mexico, and his PhD in England, at Imperial College London.

The story of how he created the theory of exotic beams may deserve a longer, separate text. It is worth saying, however, that it started from reports made in the 1980s by the University of Rochester claiming that it was possible to create beams of light that did not ‘spread’, that is, that showed no diffraction. That caused a stir because it appeared to violate the laws of physics and mathematics. When Dr Chávez-Cerda showed interest in studying the subject, his own English supervisors told him that they did not believe it was worth pursuing. He nevertheless dedicated many hours to the problem and, after performing many calculations and computer simulations, found an answer that was not immediately understood: the beams of light that did not ‘spread’ were not beams at all, but apparent beams resulting from a phenomenon called interference.
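
That interference picture can be illustrated numerically: superposing many plane waves whose wavevectors lie evenly on a cone reproduces, on a transverse axis, the Bessel function J0 that describes the profile of such a 'non-diffracting' beam. The sketch below is my own illustration of the standard integral identity J0(x) = (1/2π) ∫ cos(x cos θ) dθ, not code from Chávez-Cerda's work.

```python
import math

def j0_via_interference(x, n_waves=2000):
    """Average the fields of n_waves plane waves whose propagation
    directions are spread evenly around a cone; the resulting transverse
    profile is the Bessel function J0(x), the 'non-diffracting' beam shape."""
    return sum(
        math.cos(x * math.cos(2 * math.pi * k / n_waves)) for k in range(n_waves)
    ) / n_waves

on_axis = j0_via_interference(0.0)        # bright central spot: J0(0) = 1
first_ring = j0_via_interference(2.404826)  # first zero of J0: a dark ring
```

The bright core persists only as long as the interfering plane waves overlap, which is why the pattern merely mimics a diffraction-free beam rather than violating the laws of diffraction.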

“When I proposed this theory, it was rejected for four years. Over and over again they rejected my articles, but I kept improving my ideas until there was no argument left to reject them,” says the professor, who since he was young has treasured two activities that he practised for many years and that gave him a love of discipline and freedom: martial arts and regional dance.

Now, he has received awards such as the annual award from the European Optical Society and the recognition of "Visiting Foreign Researcher of Excellence" by the Government of China. He is also able to boast his graduate students; today scientists who have membership in the National System of Researchers (SNI) in Mexico. [Translator note: and a few of us that are abroad too!!! — Thanks Sabino!]

"The human virtue that I value the most is honesty," says the teacher, husband and father of two adult sons and 13-year-old twins. “Throughout my life and my professional experience I have met people who, through a lack of honesty, prevent things from moving in the right direction. That is why I know that when there is honesty, one can advance and everyone can grow a lot,” says the man who remembers the day his mother took him to a new elementary school in Tepic, Mexico, where they were rude to both of them. “She told me, ‘Be the best you can,’ and that's when I became good at maths,” he shared with Crónica’s readers.

Read me...

Sci-Advent - Writing Reports Tailored for AI Readers

This is a reblog from an article by John Naughton in the Guardian on Dec 5th 2020. Read the original here.

My eye was caught by the title of a working paper published by the National Bureau of Economic Research (NBER): How to Talk When a Machine Is Listening: Corporate Disclosure in the Age of AI. So I clicked and downloaded, as one does. And then started to read.

The paper is an analysis of the 10-K and 10-Q filings that American public companies are obliged to file with the Securities and Exchange Commission (SEC). The 10-K is a version of a company’s annual report, but without the glossy photos and PR hype: a corporate nerd’s delight. It has, says one guide, “the-everything-and-the-kitchen-sink data you can spend hours going through – everything from the geographic source of revenue to the maturity schedule of bonds the company has issued”. Some investors and commentators (yours truly included) find the 10-K impenetrable, but for those who possess the requisite stamina (big companies can have 10-Ks that run to several hundred pages), that’s the kind of thing they like. The 10-Q filing is the 10-K’s quarterly little brother.

The observation that triggered the research reported in the paper was that “mechanical” (ie machine-generated) downloads of corporate 10-K and 10-Q filings increased from 360,861 in 2003 to about 165m in 2016, when 78% of all downloads appear to have been triggered by request from a computer. A good deal of research in AI now goes into assessing how good computers are at extracting actionable meaning from such a tsunami of data. There’s a lot riding on this, because the output of machine-read reports is the feedstock that can drive algorithmic traders, robot investment advisers, and quantitative analysts of all stripes.

The NBER researchers, however, looked at the supply side of the tsunami – how companies have adjusted their language and reporting in order to achieve maximum impact with algorithms that are reading their corporate disclosures. And what they found is instructive for anyone wondering what life in an algorithmically dominated future might be like.

The researchers found that “increasing machine and AI readership … motivates firms to prepare filings that are more friendly to machine parsing and processing”. So far, so predictable. But there’s more: “firms with high expected machine downloads manage textual sentiment and audio emotion in ways catered to machine and AI readers”.

In other words, machine readability – measured in terms of how easily the information can be parsed and processed by an algorithm – has become an important factor in composing company reports. So a table in a report might have a low readability score because its formatting makes it difficult for a machine to recognise it as a table; but the same table could receive a high readability score if it made effective use of tagging.

The researchers contend, though, that companies are now going beyond machine readability to try to adjust the sentiment and tone of their reports in ways that might induce algorithmic “readers” to draw favourable conclusions about the content. They do so by avoiding words that are listed as negative in the criteria given to text-reading algorithms. And they are also adjusting the tones of voice used in the standard quarterly conference calls with analysts, because they suspect those on the other end of the call are using voice analysis software to identify vocal patterns and emotions in their commentary.
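
In its simplest form, the textual-sentiment scoring such machine readers apply is just counting lexicon hits. The mini word list below is invented for illustration; real analyses of filings typically use the much larger Loughran-McDonald finance-specific word lists.

```python
# Invented mini-lexicon standing in for a real negative-word list.
NEGATIVE_WORDS = {"loss", "litigation", "decline", "impairment", "adverse"}

def negative_tone(text):
    """Fraction of words that fall in the negative lexicon -- a crude
    proxy for the 'textual sentiment' scores machine readers compute."""
    words = [w.strip(".,;:()\"'").lower() for w in text.split()]
    negatives = sum(1 for w in words if w in NEGATIVE_WORDS)
    return negatives / max(len(words), 1)

bearish = negative_tone("Revenue decline and litigation risk")  # 2 of 5 words
bullish = negative_tone("Strong growth this quarter")           # no hits
```

A filing drafter who knows the lexicon can lower this score simply by swapping listed words for unlisted synonyms, which is exactly the gaming behaviour the paper documents.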

In one sense, this kind of arms race is predictable in any human activity where a market edge may be acquired by whoever has better technology. It’s a bit like the war between Google and the so-called “optimisers” who try to figure out how to game the latest version of the search engine’s page ranking algorithm. But at another level, it’s an example of how we are being changed by digital technology – as Brett Frischmann and Evan Selinger argued in their sobering book Re-Engineering Humanity.

After I’d typed that last sentence, I went looking for publication information on the book and found myself trying to log in to a site that, before it would admit me, demanded that I solve a visual puzzle: on an image of a road junction divided into 8 x 4 squares I had to click on all squares that showed traffic lights. I did so, and was immediately presented with another, similar puzzle, which I also dutifully solved, like an obedient monkey in a lab.

And the purpose of this absurd challenge? To convince the computer hosting the site that I was not a robot. It was an inverted Turing test in other words: instead of a machine trying to fool a human into thinking that it was human, I was called upon to convince a computer that I was a human. I was being re-engineered. The road to the future has taken a funny turn.

Read me...

Sci-Advent - Challenges in Deploying Machine Learning: a Survey of Case Studies

This survey paper extracts practical considerations from recent case studies of a variety of ML applications and is organized into sections that correspond to stages of a typical machine learning workflow: from data management and model learning to verification and deployment.

In recent years, machine learning has received increased interest both as an academic research field and as a solution to real-world business problems. However, deploying machine learning models in production systems can present a number of issues and concerns. This survey reviews published reports of deploying machine learning solutions across a variety of use cases, industries and applications, and extracts practical considerations corresponding to the stages of the machine learning deployment workflow. Our survey shows that practitioners face challenges at each stage of deployment. The goal of this paper is to lay out a research agenda to explore approaches addressing these challenges.



Read me...

Sci-Advent - DeepMind’s AI solving protein structures

This is a reblog of the Nature article by Ewen Callaway. You can see the original here.

An artificial intelligence (AI) network developed by Google AI offshoot DeepMind has made a gargantuan leap in solving one of biology’s grandest challenges — determining a protein’s 3D shape from its amino-acid sequence.

DeepMind’s program, called AlphaFold, outperformed around 100 other teams in a biennial protein-structure prediction challenge called CASP, short for Critical Assessment of Structure Prediction. The results were announced on 30 November, at the start of the conference — held virtually this year — that takes stock of the exercise.

“This is a big deal,” says John Moult, a computational biologist at the University of Maryland in College Park, who co-founded CASP in 1994 to improve computational methods for accurately predicting protein structures. “In some sense the problem is solved.”

The ability to accurately predict protein structures from their amino-acid sequence would be a huge boon to life sciences and medicine. It would vastly accelerate efforts to understand the building blocks of cells and enable quicker and more advanced drug discovery.

AlphaFold came top of the table at the last CASP — in 2018, the first year that London-based DeepMind participated. But, this year, the outfit’s deep-learning network was head-and-shoulders above other teams and, say scientists, performed so mind-bogglingly well that it could herald a revolution in biology.

“It’s a game changer,” says Andrei Lupas, an evolutionary biologist at the Max Planck Institute for Developmental Biology in Tübingen, Germany, who assessed the performance of different teams in CASP. AlphaFold has already helped him find the structure of a protein that has vexed his lab for a decade, and he expects it will alter how he works and the questions he tackles. “This will change medicine. It will change research. It will change bioengineering. It will change everything,” Lupas adds.

In some cases, AlphaFold’s structure predictions were indistinguishable from those determined using ‘gold standard’ experimental methods such as X-ray crystallography and, in recent years, cryo-electron microscopy (cryo-EM). AlphaFold might not obviate the need for these laborious and expensive methods — yet — say scientists, but the AI will make it possible to study living things in new ways.

The structure problem

Proteins are the building blocks of life, responsible for most of what happens inside cells. How a protein works and what it does is determined by its 3D shape — ‘structure is function’ is an axiom of molecular biology. Proteins tend to adopt their shape without help, guided only by the laws of physics.

For decades, laboratory experiments have been the main way to get good protein structures. The first complete structures of proteins were determined, starting in the 1950s, using a technique in which X-ray beams are fired at crystallized proteins and the diffracted light translated into a protein’s atomic coordinates. X-ray crystallography has produced the lion’s share of protein structures. But, over the past decade, cryo-EM has become the favoured tool of many structural-biology labs.

Scientists have long wondered how a protein’s constituent parts — a string of different amino acids — map out the many twists and folds of its eventual shape. Early attempts to use computers to predict protein structures in the 1980s and 1990s performed poorly, say researchers. Lofty claims for methods in published papers tended to disintegrate when other scientists applied them to other proteins.

Moult started CASP to bring more rigour to these efforts. The event challenges teams to predict the structures of proteins that have been solved using experimental methods, but for which the structures have not been made public. Moult credits the experiment — he doesn’t call it a competition — with vastly improving the field, by calling time on overhyped claims. “You’re really finding out what looks promising, what works, and what you should walk away from,” he says.

DeepMind’s 2018 performance at CASP13 startled many scientists in the field, which has long been the bastion of small academic groups. But its approach was broadly similar to those of other teams that were applying AI, says Jinbo Xu, a computational biologist at the University of Chicago, Illinois.

The first iteration of AlphaFold applied the AI method known as deep learning to structural and genetic data to predict the distance between pairs of amino acids in a protein. In a second step that does not invoke AI, AlphaFold uses this information to come up with a ‘consensus’ model of what the protein should look like, says John Jumper at DeepMind, who is leading the project.

The team tried to build on that approach but eventually hit a wall. So it changed tack, says Jumper, and developed an AI network that incorporated additional information about the physical and geometric constraints that determine how a protein folds. They also set it a more difficult task: instead of predicting relationships between amino acids, the network predicts the final structure of a target protein sequence. “It’s a more complex system by quite a bit,” Jumper says.

Startling accuracy

CASP takes place over several months. Target proteins or portions of proteins called domains — about 100 in total — are released on a regular basis and teams have several weeks to submit their structure predictions. A team of independent scientists then assesses the predictions using metrics that gauge how similar a predicted protein is to the experimentally determined structure. The assessors don’t know who is making a prediction.

AlphaFold’s predictions arrived under the name ‘group 427’, but the startling accuracy of many of its entries made them stand out, says Lupas. “I had guessed it was AlphaFold. Most people had,” he says.

Some predictions were better than others, but nearly two-thirds were comparable in quality to experimental structures. In some cases, says Moult, it was not clear whether the discrepancy between AlphaFold’s predictions and the experimental result was a prediction error or an artefact of the experiment. 

AlphaFold’s predictions were poor matches to experimental structures determined by a technique called nuclear magnetic resonance spectroscopy, but this could be down to how the raw data is converted into a model, says Moult. The network also struggles to model individual structures in protein complexes, or groups, whereby interactions with other proteins distort their shapes.

Overall, teams predicted structures more accurately this year, compared with the last CASP, but much of the progress can be attributed to AlphaFold, says Moult. On protein targets considered to be moderately difficult, the best performances of other teams typically scored 75 on a 100-point scale of prediction accuracy, whereas AlphaFold scored around 90 on the same targets, says Moult.
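The 100-point scale Moult refers to is CASP's GDT_TS score, which averages the fraction of residues whose predicted positions fall within 1, 2, 4 and 8 ångströms of the experimentally determined structure. A minimal illustrative sketch of such a score (hypothetical distances, not CASP's official code):

```python
# Sketch of a GDT_TS-style score: for each cutoff, count the fraction of
# residues whose predicted C-alpha atom lies within that distance of the
# experimental position, then average over the four cutoffs and scale to 100.
def gdt_ts(ca_distances_angstrom):
    cutoffs = (1.0, 2.0, 4.0, 8.0)
    n = len(ca_distances_angstrom)
    fractions = [sum(d <= c for d in ca_distances_angstrom) / n for c in cutoffs]
    return 100.0 * sum(fractions) / len(cutoffs)

# Four hypothetical per-residue errors, in angstroms:
print(gdt_ts([0.5, 1.5, 3.0, 10.0]))  # 56.25
```

A perfect prediction scores 100; random structures score far lower, which is why AlphaFold's ~90 on moderately difficult targets stood out.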

About half of the teams mentioned ‘deep learning’ in the abstract summarizing their approach, Moult says, suggesting that AI is making a broad impact on the field. Most of these were from academic teams, but Microsoft and the Chinese technology company Tencent also entered CASP14.

Mohammed AlQuraishi, a computational biologist at Columbia University in New York City and a CASP participant, is eager to dig into the details of AlphaFold’s performance at the contest, and learn more about how the system works when the DeepMind team presents its approach on 1 December. It’s possible — but unlikely, he says — that an easier-than-usual crop of protein targets contributed to the performance. AlQuraishi’s strong hunch is that AlphaFold will be transformational.

“I think it’s fair to say this will be very disruptive to the protein-structure-prediction field. I suspect many will leave the field as the core problem has arguably been solved,” he says. “It’s a breakthrough of the first order, certainly one of the most significant scientific results of my lifetime.”

Faster structures

An AlphaFold prediction helped to determine the structure of a bacterial protein that Lupas’s lab has been trying to crack for years. Lupas’s team had previously collected raw X-ray diffraction data, but transforming these Rorschach-like patterns into a structure requires some information about the shape of the protein. Tricks for getting this information, as well as other prediction tools, had failed. “The model from group 427 gave us our structure in half an hour, after we had spent a decade trying everything,” Lupas says.

Demis Hassabis, DeepMind’s co-founder and chief executive, says that the company plans to make AlphaFold useful so other scientists can employ it. (It previously published enough details about the first version of AlphaFold for other scientists to replicate the approach.) It can take AlphaFold days to come up with a predicted structure, which includes estimates on the reliability of different regions of the protein. “We’re just starting to understand what biologists would want,” adds Hassabis, who sees drug discovery and protein design as potential applications.

In early 2020, the company released predictions of the structures of a handful of SARS-CoV-2 proteins that hadn’t yet been determined experimentally. DeepMind’s predictions for a protein called Orf3a ended up being very similar to one later determined through cryo-EM, says Stephen Brohawn, a molecular neurobiologist at the University of California, Berkeley, whose team released the structure in June. “What they have been able to do is very impressive,” he adds.

Real-world impact

AlphaFold is unlikely to shutter labs, such as Brohawn’s, that use experimental methods to solve protein structures. But it could mean that lower-quality and easier-to-collect experimental data would be all that’s needed to get a good structure. Some applications, such as the evolutionary analysis of proteins, are set to flourish because the tsunami of available genomic data might now be reliably translated into structures. “This is going to empower a new generation of molecular biologists to ask more advanced questions,” says Lupas. “It’s going to require more thinking and less pipetting.”

“This is a problem that I was beginning to think would not get solved in my lifetime,” says Janet Thornton, a structural biologist at the European Molecular Biology Laboratory-European Bioinformatics Institute in Hinxton, UK, and a past CASP assessor. She hopes the approach could help to illuminate the function of the thousands of unsolved proteins in the human genome, and make sense of disease-causing gene variations that differ between people.

AlphaFold’s performance also marks a turning point for DeepMind. The company is best known for wielding AI to master games such as Go, but its long-term goal is to develop programs capable of achieving broad, human-like intelligence. Tackling grand scientific challenges, such as protein-structure prediction, is one of the most important applications of its AI, Hassabis says. “I do think it’s the most significant thing we’ve done, in terms of real-world impact.”

doi: https://doi.org/10.1038/d41586-020-03348-4

Read me...

Sci-Advent - Physicists Nail Down the ‘Magic Number’ That Shapes the Universe

This is a reblog of the article in Nautilus by Natalie Wolchover. See the original here.

A team in Paris has made the most precise measurement yet of the fine-structure constant, killing hopes for a new force of nature.

As fundamental constants go, the speed of light, c, enjoys all the fame, yet c’s numerical value says nothing about nature; it differs depending on whether it’s measured in meters per second or miles per hour. The fine-structure constant, by contrast, has no dimensions or units. It’s a pure number that shapes the universe to an astonishing degree — “a magic number that comes to us with no understanding,” as Richard Feynman described it. Paul Dirac considered the origin of the number “the most fundamental unsolved problem of physics.”

Numerically, the fine-structure constant, denoted by the Greek letter α (alpha), comes very close to the ratio 1/137. It commonly appears in formulas governing light and matter. “It’s like in architecture, there’s the golden ratio,” said Eric Cornell, a Nobel Prize-winning physicist at the University of Colorado, Boulder and the National Institute of Standards and Technology. “In the physics of low-energy matter — atoms, molecules, chemistry, biology — there’s always a ratio” of bigger things to smaller things, he said. “Those ratios tend to be powers of the fine-structure constant.”

The constant is everywhere because it characterizes the strength of the electromagnetic force affecting charged particles such as electrons and protons. “In our everyday world, everything is either gravity or electromagnetism. And that’s why alpha is so important,” said Holger Müller, a physicist at the University of California, Berkeley. Because 1/137 is small, electromagnetism is weak; as a consequence, charged particles form airy atoms whose electrons orbit at a distance and easily hop away, enabling chemical bonds. On the other hand, the constant is also just big enough: Physicists have argued that if it were something like 1/138, stars would not be able to create carbon, and life as we know it wouldn’t exist.

Physicists have more or less given up on a century-old obsession over where alpha’s particular value comes from; they now acknowledge that the fundamental constants could be random, decided in cosmic dice rolls during the universe’s birth. But a new goal has taken over.

Physicists want to measure the fine-structure constant as precisely as possible. Because it’s so ubiquitous, measuring it precisely allows them to test their theory of the interrelationships between elementary particles — the majestic set of equations known as the Standard Model of particle physics. Any discrepancy between ultra-precise measurements of related quantities could point to novel particles or effects not accounted for by the standard equations. Cornell calls these kinds of precision measurements a third way of experimentally discovering the fundamental workings of the universe, along with particle colliders and telescopes.

Today, in a new paper in the journal Nature, a team of four physicists led by Saïda Guellati-Khélifa at the Kastler Brossel Laboratory in Paris reported the most precise measurement yet of the fine-structure constant. The team measured the constant’s value to the 11th decimal place, reporting that α = 1/137.03599920611. (The last two digits are uncertain.)

With a margin of error of just 81 parts per trillion, the new measurement is nearly three times more precise than the previous best measurement in 2018 by Müller’s group at Berkeley, the main competition. (Guellati-Khélifa made the most precise measurement before Müller’s in 2011.) Müller said of his rival’s new measurement of alpha, “A factor of three is a big deal. Let’s not be shy about calling this a big accomplishment.”
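For readers who want to see where the number comes from, α is defined as e²/(4πε₀ħc). A quick check with CODATA values (a sketch, not the paper's analysis) reproduces the headline figure:

```python
# Fine-structure constant from its definition: alpha = e^2 / (4*pi*eps0*hbar*c).
# e, h and c are exact in the SI since 2019; eps0 is a measured quantity.
import math

e    = 1.602176634e-19    # elementary charge, C (exact)
h    = 6.62607015e-34     # Planck constant, J*s (exact)
c    = 299792458.0        # speed of light, m/s (exact)
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m (CODATA 2018)

hbar = h / (2 * math.pi)
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(1 / alpha)  # ~137.036
```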

Guellati-Khélifa has been improving her experiment for the past 22 years. She gauges the fine-structure constant by measuring how strongly rubidium atoms recoil when they absorb a photon. (Müller does the same with cesium atoms.) The recoil velocity reveals how heavy rubidium atoms are — the hardest factor to gauge in a simple formula for the fine-structure constant. “It’s always the least accurate measurement that’s the bottleneck, so any improvement in that leads to an improvement in the fine-structure constant,” Müller explained.

The Paris experimenters begin by cooling the rubidium atoms almost to absolute zero, then dropping them in a vacuum chamber. As the cloud of atoms falls, the researchers use laser pulses to put the atoms in a quantum superposition of two states — kicked by a photon and not kicked. The two possible versions of each atom travel on separate trajectories until more laser pulses bring the halves of the superposition back together. The more an atom recoils when kicked by light, the more out of phase it is with the unkicked version of itself. The researchers measure this difference to reveal the atoms’ recoil velocity. “From the recoil velocity, we extract the mass of the atom, and the mass of the atom is directly involved in the determination of the fine-structure constant,” Guellati-Khélifa said.
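The link between the measured recoil and α can be made explicit. The experiment determines h/m for rubidium, and α then follows from α² = (2R∞/c) · (m/mₑ) · (h/m), where the Rydberg constant R∞ and the atom-to-electron mass ratio are known precisely from other measurements. A rough numerical sketch with rounded reference values (not the Paris team's data):

```python
# Photon-recoil route to the fine-structure constant:
#   alpha^2 = (2 * R_inf / c) * (m / m_e) * (h / m)
# Rounded reference values; the experiment measures h/m far more precisely.
import math

R_inf    = 10973731.568160   # Rydberg constant, 1/m
c        = 299792458.0       # speed of light, m/s
m_ratio  = 158425.9          # m(Rb-87) / m(electron), dimensionless
h_over_m = 4.591359e-9       # h / m(Rb-87), m^2/s (the measured quantity)

alpha = math.sqrt((2 * R_inf / c) * m_ratio * h_over_m)
print(1 / alpha)  # ~137.036
```

This is why any improvement in h/m feeds straight through to α, as Müller notes.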

In such precise experiments, every detail matters. Table 1 of the new paper is an “error budget” listing 16 sources of error and uncertainty that affect the final measurement. These include gravity and the Coriolis force created by Earth’s rotation — both painstakingly quantified and compensated for. Much of the error budget comes from foibles of the laser, which the researchers have spent years perfecting.

For Guellati-Khélifa, the hardest part is knowing when to stop and publish. She and her team stopped the week of February 17, 2020, just as the coronavirus was gaining a foothold in France. Asked whether deciding to publish is like an artist deciding that a painting is finished, Guellati-Khélifa said, “Exactly. Exactly. Exactly.”

Surprisingly, her new measurement differs from Müller’s 2018 result in the seventh digit, a bigger discrepancy than the margin of error of either measurement. This means — barring some fundamental difference between rubidium and cesium — that one or both of the measurements has an unaccounted-for error. The Paris group’s measurement is the more precise, so it takes precedence for now, but both groups will improve their setups and try again.

Though the two measurements differ, they closely match the value of alpha inferred from precise measurements of the electron’s g-factor, a constant related to its magnetic moment, or the torque that the electron experiences in a magnetic field. “You can connect the fine-structure constant to the g-factor with a hell of a lot of math,” said Cornell. “If there are any physical effects missing from the equations, we would be getting the answer wrong.”

Instead, the measurements match beautifully, largely ruling out some proposals for new particles. The agreement between the best g-factor measurements and Müller’s 2018 measurement was hailed as the Standard Model’s greatest triumph. Guellati-Khélifa’s new result is an even better match. “It’s the most precise agreement between theory and experiment,” she said.

And yet she and Müller have both set about making further improvements. The Berkeley team has switched to a new laser with a broader beam (allowing it to strike their cloud of cesium atoms more evenly), while the Paris team plans to replace their vacuum chamber, among other things.

What kind of person puts such a vast effort into such scant improvements? Guellati-Khélifa named three traits: “You have to be rigorous, passionate and honest with yourself.” Müller said in response to the same question, “I think it’s exciting because I love building shiny nice machines. And I love applying them to something important.” He noted that no one can single-handedly build a high-energy collider like Europe’s Large Hadron Collider. But by constructing an ultra-precise instrument rather than a super-energetic one, Müller said, “you can do measurements relevant to fundamental physics, but with three or four people.”

Read me...

Fluorescent Platypuses (??)

I was pleasantly surprised and bewildered by this article in the New York Times reporting on a recent paper in the journal Mammalia about fluorescence observed in platypuses when ultraviolet light is shone on them... Yes! Platypuses are already the most extraordinary collection of oddities: mammals that lay eggs, with webbed feet and duck-like bills, and venomous to boot... and now they also fluoresce!

Now, this is nothing strictly new. As Cara Giaimo reports in the NYT article, "A lot of living things do, too." (That is, fluoresce.) Scorpions, lichens and puffin beaks all pop under UV light. Among mammals the ability is rarer, but there are some examples, such as a rainbow of opossums and a bright pink flying squirrel.

As to why the platypus fluoresces, well, that remains a mystery!

Read me...

2020 Nobel Prize in Physics - Black holes

I had intended to post this much earlier, certainly closer to the actual announcement of the Nobel Prizes in early October. It has, however, been a very busy period. Better late than never, right?

I was very pleased to see that the winners of the 2020 Nobel Prize in Physics were a group that combined the observational with the theoretical. Sir Roger Penrose, Reinhard Genzel, and Andrea Ghez are the recipients of the 2020 Nobel Prize in Physics. Penrose receives half the 10 million Swedish krona while Ghez and Genzel will share the other half.

Penrose's work took the concept of black holes from the realm of speculation to a sound theoretical idea underpinning modern astrophysics. Using topology and general relativity, Penrose explained how the collapse of matter under gravity leads to the singularity at the centre of a black hole.

A few decades after Penrose's work in the 1960s, Genzel and Ghez independently used adaptive optics and speckle imaging to analyse the motion of stars tightly orbiting Sagittarius A*. Their work led to the conclusion that the only explanation for the radio source at the centre of the Milky Way was a black hole.

Ghez is the fourth woman to be named a Nobel physics laureate, after Donna Strickland (2018), Maria Goeppert Mayer (1963), and Marie Curie (1903).

From an Oddity to an Observation

In 1916 Karl Schwarzschild described a solution to Einstein's field equations for the curved spacetime around a spherically symmetric mass. Some terms in the solution either diverged or vanished at $latex r=\frac{2GM}{c^2}$ or $latex r=0$. A couple of decades later, Oppenheimer and his student Hartland Snyder realised that the former value corresponded to the radius within which light, under the influence of gravity, would no longer be able to reach outside observers - the so-called event horizon. Their work would need more than mathematical assumptions to be accepted.
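Plugging numbers into the critical radius r = 2GM/c², now called the Schwarzschild radius, gives a feel for the scales involved; the masses below are rounded textbook values:

```python
# Schwarzschild radius r_s = 2*G*M/c^2 for a couple of familiar masses.
G = 6.67430e-11     # gravitational constant, m^3 kg^-1 s^-2
c = 299792458.0     # speed of light, m/s
M_sun = 1.989e30    # solar mass, kg

def schwarzschild_radius(mass_kg):
    return 2 * G * mass_kg / c**2

print(schwarzschild_radius(M_sun))          # ~2.95e3 m: the Sun squeezed into ~3 km
print(schwarzschild_radius(4.0e6 * M_sun))  # ~1.2e10 m: a Sagittarius A*-like mass
```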

By 1964 Penrose had come up with a topological picture of this gravitational collapse, crucially without the assumptions made by Oppenheimer and Snyder. His work instead required the idea of a trapped surface: a 2D surface in which all light orthogonal to it converges. Penrose showed that inside the event horizon the radial direction becomes time-like. It is impossible to reverse out of the black hole, and the implication is that all matter ends up at the singularity. Penrose's research established black holes as a plausible explanation for objects such as quasars and other active galactic nuclei.

Closer to Home

Although our own galaxy is by no means spewing energy like your average quasar, it still emits X-rays and other radio signals. Could it be that there is a black hole-like object at the heart of the Milky Way? This was a question that Genzel and Ghez would come to answer in time.

With the use of infrared (IR) spectroscopy, studies of gas clouds near the galactic centre showed rising velocities with decreasing distances to the centre, suggesting the presence of a massive, compact source of gravitation. These studies in the 1980s were not definitive but provided a tantalising possibility.

In the mid 1990s, both Genzel and Ghez set out to obtain better evidence with the help of large telescopes operating in the near-IR to detect photons escaping the galactic center. Genzel and colleagues began observing from Chile, whereas Ghez and her team from Hawaii.

Their independent development of speckle imaging, a technique that corrects for the distortions caused by Earth's atmosphere, enabled them to make the crucial observations. The technique improves the images by stacking a series of exposures, bringing the smeared light of individual stars into alignment. In 1997, both groups published measurements of stellar motions that strongly favoured the black hole explanation.

Further to that work, the use of adaptive optics by both laureates not only improved the resolutions obtained, but also provided the possibility of carrying out spectroscopic analyses which enabled them to get velocities in 3D and therefore obtain precise orbits.

The "star" object in this saga is the so-called S0-2 (Ghez’s group) or S2 (Genzel’s group) star. It approaches within about 17 light-hours of Sagittarius A* every 16 years in a highly elliptical orbit.
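Those orbital parameters are enough for a back-of-the-envelope mass estimate via Kepler's third law, M = 4π²a³/(GT²). Taking a rounded semi-major axis of about 1,000 AU for S2 (an assumed figure of the right order; the measured orbit is highly elliptical) together with the 16-year period:

```python
# Kepler's third law mass estimate: M = 4*pi^2 * a^3 / (G * T^2).
import math

G     = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
AU    = 1.495978707e11    # astronomical unit, m
YEAR  = 3.15576e7         # Julian year, s
M_sun = 1.989e30          # solar mass, kg

a = 1000 * AU             # S2 semi-major axis (assumed round value)
T = 16 * YEAR             # S2 orbital period

M = 4 * math.pi**2 * a**3 / (G * T**2)
print(M / M_sun)  # ~4e6 solar masses
```

The result, around four million solar masses packed inside S2's orbit, is what makes a supermassive black hole the only viable explanation.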

Congratulations to Ghez and Genzel, and Penrose.

Read me...

Let there be light: Florence Nightingale

This year, 2020, the word Nightingale has acquired new connotations. It is no longer just the name of a passerine bird with beautiful and powerful birdsong; it is the name that NHS England has given to the temporary hospitals set up for the COVID-19 pandemic. In normal circumstances it would be a very good name for a hospital, but given the circumstances it becomes more poignant. It is even more so considering that this year, 2020, is the bicentenary of Florence Nightingale's birth.

Florence Nightingale was born on 12th May, 1820 in Florence, Italy (hence the name!) and became a social reformer, statistician, and the founder of modern nursing. She became the first woman elected to the Royal Statistical Society, and in 1874 an honorary member of the American Statistical Association.

With the power of data, Nightingale was able to save lives and change policy. Her analysis of data from the Crimean War was compelling and persuasive in its simplicity. It allowed her and her team to pay attention to time - tracking admissions to hospital and, crucially, deaths - on a month-by-month basis. We must remember that the statistical tests we know today were not yet established tools, and the workhorse of statistics, regression, was decades in the future. The data analysis, presented in columns and rows, was supported by the powerful graphics that many of us still admire today.

In 2014 I had an opportunity to admire her Nightingale roses, or to use their formal name, polar area charts, in the exhibition Science is Beautiful at the British Library.

Florence Nightingale's "rose diagram", showing the Causes of Mortality in the Army in the East, 1858. Photograph: British Library

These and other charts were used in the report she later published in 1858 under the title "Notes on Matters Affecting the Health, Efficiency, and Hospital Administration of the British Army". The report included charts of deaths by barometric pressure and temperature, showing that deaths were higher in hotter months than in cooler ones. In the polar charts shown above, Nightingale presents the decrease in death rates that had been achieved. Let's read it from her own hand; here is the note accompanying the chart above:

The areas of the blue, red & black wedges are each measured from the centre as the common vertex.

The blue wedges measured from the centre of the circle represent area for area the deaths from Preventible or Mitigable Zymotic diseases, the red wedges measured from the centre the deaths from wounds, & the black wedges measured from the centre the deaths from all other causes.

The black line across the red triangle in Nov. 1854 marks the boundary of the deaths from all other causes during the month.

In October 1854, & April 1855, the black area coincides with the red, in January & February 1855, the blue area coincides with the black.

The entire areas may be compared by following the blue, the red & the black lines enclosing them.

Nightingale recognised that soldiers were dying from other causes: malnutrition, poor sanitation, and lack of activity. Her aim was to improve the conditions of wounded soldiers and improve their chances of survival. This was evidence that later helped put focus on the importance of patient welfare.
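The geometry behind these charts is worth spelling out: in a polar area chart each wedge's area, not its radius, encodes the value, so the radius must grow as the square root of the count. A small sketch with made-up numbers (not Nightingale's data):

```python
# In a polar-area ("rose") chart each of the 12 monthly wedges spans
# theta = 2*pi/12 radians, and a wedge of radius r has area theta * r^2 / 2.
# Making area proportional to the count gives r = sqrt(2 * count / theta).
import math

theta = 2 * math.pi / 12  # angular width of one monthly wedge

def wedge_radius(count):
    return math.sqrt(2 * count / theta)

# Illustrative counts only: quadrupling the count doubles the radius,
# which is exactly why area, not radius, must carry the value.
print(wedge_radius(100))
print(wedge_radius(400))
```

Charts that scale the radius linearly with the value exaggerate large numbers, a pitfall Nightingale's area encoding avoids.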

Once the war was over, Florence Nightingale returned home, but her quest did not finish there. She continued her work to improve conditions in hospitals. She became a star in her own time, and with time the legend of "The Lady with the Lamp" solidified in the national and international consciousness. You may have heard of her in the 1857 poem by Henry Wadsworth Longfellow called "Santa Filomena":

Lo! in that house of misery
A lady with a lamp I see
Pass through the glimmering gloom,
And flit from room to room

Today, Nightingale's lamp continues to bring hope to her patients. Not just for those working and being treated in the NHS Nightingale hospitals, but for all of us, through the metaphorical light of rational optimism. Let there be light.

Read me...

Exponential Growth and Epidemics

With all that is happening with Covid-19, it is perhaps relevant to remind ourselves how exponential growth works.
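As a minimal reminder of why unchecked spread is so alarming, here is a sketch of pure exponential growth with an assumed doubling time (illustrative numbers only, not epidemic data):

```python
# Pure exponential growth: the case count doubles every `doubling_time` days.
def cases(t_days, initial=100, doubling_time=3.0):
    """Case count after t_days of unchecked exponential growth."""
    return initial * 2 ** (t_days / doubling_time)

print(round(cases(0)))   # 100
print(round(cases(3)))   # 200: one doubling
print(round(cases(30)))  # 102400: ten doublings in a month
```

Real epidemics eventually bend away from the exponential (towards logistic growth) as susceptible people run out or behaviour changes, but the early phase is well described by this curve.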


Read me...

Science Communication - Technical Writing and Presentation Advice

The two videos below were made a few years ago to support a Science Communication and Group Project module at the School of Physics, Astronomy and Mathematics at the University of Hertfordshire. The work was supported by the Institute of Physics and the HE STEM programme. I also got support from the Institute of Mathematics and its Applications. The tools are probably a bit dated now, but I hope the principles still help some students trying to get their work seen.

The students were encouraged to share and communicate the results of their projects via a video and they were supported by tutorials on how to do screencasts.

Students were also encouraged to prepare technical documentation, and the videos on using LaTeX and structuring their documents with LaTeX were very useful.

Technical Writing

This presentation addresses some issues we should take into account when writing for technical purposes.


Presentation Advice

In this tutorial we will address some of the points that can help you make a better presentation, whether for a live talk or for a recording.


Read me...

Screencasting with Macs and PCs

The videos below were made a few years ago to support a Science Communication and Group Project module at the School of Physics, Astronomy and Mathematics at the University of Hertfordshire. The work was supported by the Institute of Physics and the HE STEM programme. I also got support from the Institute of Mathematics and its Applications. The tools are probably a bit dated now, but I hope the principles still help some students trying to get their work seen.

Students were asked to prepare a short video to present the results of their project and share it with the world. To support them, the videos below were prepared.

Students were also encouraged to prepare technical documentation and the videos for using LaTeX and structuring their documents with LaTeX were very useful.

Screencasting with a Mac

In this video we will see some tools to capture video from your screen using a Mac. The tools are Quicktime Player, MPEG Streamclip and iMovie.


Screencasting with a PC

In this video we will see some tools to capture video from your screen using a PC. The tools are CamStudio and Freemake Video Converter.


Uploading a Video to Vimeo

In this tutorial we will see how to set up an account on Vimeo and how to upload your screencast. You will also be able to send a link to your video to your friends and other people.


Read me...

Structured Documents in LaTeX

This is a video I made a few years ago to encourage my students to use better tools to write dissertations, theses and reports that include mathematics. The principles stand, although the tools may have moved on since then. I am reposting them as requested by a colleague of mine, Dr Catarina Carvalho, who I hope will still find them useful.

In this video we continue explaining how to use LaTeX. Here we will see how to use a master document in order to build a thesis or dissertation.
We assume that you have already had a look at the tutorial entitled: LaTeX for writing mathematics - An introduction

Structured Documents in LaTeX


Read me...

LaTeX for writing mathematics - An introduction

This is a video I made a few years ago to encourage my students to use better tools to write dissertations, theses and reports that include mathematics. The principles stand, although the tools may have moved on since then. I am reposting them as requested by a colleague of mine, Dr Catarina Carvalho, who I hope will still find them useful.

In this video we explore the LaTeX document preparation system. We start by explaining an example document. We have made use of TeXmaker as an editor, given its flexibility and the fact that it is available for different platforms.

LaTeX for writing mathematics - An introduction


Read me...

The Year 2019 in Physics

Physicists saw a black hole for the first time, debated the expansion rate of the universe, pondered the origin of time and modeled the end of clouds.
— Read on www.quantamagazine.org/quantas-year-in-physics-2019-20191223/

Read me...

2019 Nobel Prize in Chemistry

From left: John Goodenough, M. Stanley Whittingham, and Akira Yoshino. Credits: University of Texas at Austin; Binghamton University; the Japan Prize Foundation

Originally published in Physics Today by Alex Lopatka

John Goodenough, M. Stanley Whittingham, and Akira Yoshino will receive the 2019 Nobel Prize in Chemistry for developing lithium-ion batteries, the Royal Swedish Academy of Sciences announced on Wednesday. Goodenough (University of Texas at Austin), Whittingham (Binghamton University in New York), and Yoshino (Asahi Kasei Corp and Meijo University in Japan) will each receive one-third of the 9 million Swedish krona (roughly $900 000) prize. Their research not only allowed for the commercial-scale manufacture of lithium-ion batteries, but it also has supercharged research into all sorts of new technology, including wind and solar power.

At the heart of any battery is a redox reaction. During the discharge phase, the oxidation reaction at the anode frees ions to travel through a liquid electrolyte solution to the cathode, which is undergoing a reduction reaction. Meanwhile, electrons hum through a circuit to power a connected electronic device. For the recharge phase, the redox processes reverse, and the ions go back to the anode so that it’s ready for another discharge cycle.

The now ubiquitous lithium-ion battery that powers smartphones, electric vehicles, and more got its start shortly before the 1973 oil crisis. The US Atomic Energy Commission asked Goodenough, who was then at MIT’s Lincoln Laboratory, to evaluate a project by battery scientists at the Ford Motor Company. They were looking into the feasibility of molten-salt batteries, which used sodium and sulfur, to replace the standard but outdated lead–acid batteries developed about a century earlier. But by the late 1960s, it became clear that high operating temperatures and corrosion problems made those batteries impractical (see the article by Matthew Eisler, Physics Today, September 2016, page 30).

Whittingham, then a research scientist at Exxon, instead considered low-temperature, high-energy batteries that could not only power electric vehicles but also store solar energy during off-peak hours. To that end he developed a battery in 1976 with a titanium disulfide cathode paired with a lithium metal anode. Lithium’s low standard reduction potential of −3.05 V makes it especially attractive for high-density and high-voltage battery cells. Critically, Whittingham’s design employed lithium ions that were intercalated—that is, inserted between layers of the TiS2 structure—and provided a means to reversibly store the lithium during the redox reactions.
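As a quick sanity check on the voltages quoted here, a cell's potential is the difference between the cathode and anode reduction potentials, E_cell = E_cathode - E_anode. In the sketch below, the lithium value is the -3.05 V cited in the article; the TiS2 cathode potential is an illustrative assumption chosen to reproduce the reported ~2.5 V cell, not a measured figure:

```python
# Back-of-the-envelope cell voltage from standard reduction potentials.
# E_cell = E_cathode - E_anode.
E_anode_li = -3.05      # V vs. SHE, lithium (value cited in the article)
E_cathode_tis2 = -0.55  # V vs. SHE, assumed illustrative TiS2 value

E_cell = E_cathode_tis2 - E_anode_li
print(f"Approximate cell voltage: {E_cell:.2f} V")
```

Lithium's very negative potential is what makes the difference, and hence the cell voltage, so large compared with lead-acid chemistry.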

Illustration of Whittingham's battery.
The lithium-ion battery designed by M. Stanley Whittingham had a titanium disulfide cathode and a lithium metal anode, as illustrated here. John Goodenough and Akira Yoshino improved on the technology by replacing the cathode and anode with lithium cobalt oxide and graphite, respectively. Credit: Johan Jarnestad/The Royal Swedish Academy of Sciences

Lithium’s high reactivity, however, means that it must be isolated from air and water to avoid dangerous reactions. Whittingham solved that problem by using nonaqueous electrolyte solutions that had been carefully designed and tested by other researchers in lithium electrochemistry experiments conducted a few years earlier. The proof of concept was a substantial improvement: Whittingham’s lithium-ion battery had a higher cell potential than the lead–acid battery’s—2.5 V compared with 2 V.

Whittingham’s lithium-ion battery, though, wasn’t particularly stable. After repeated discharging and recharging, whisker-like crystals of lithium would grow on the anode. Eventually the wispy threads would grow large enough to breach the barrier separating the anode from the cathode, and the battery would short-circuit or even explode.

In 1980 Goodenough didn’t solve that problem, but he did come up with a much better material for the cathode. Along with Koichi Mizushima and colleagues at Oxford University, he found that lithium cobalt oxide could be used for the cathode. As with the TiS2, the cobalt oxide structure was tightly intercalated with lithium and could thus provide the cathode with sufficient energy density. Goodenough’s insight into the relationship between the cobalt oxide structure and voltage potential resulted in better battery performance; the voltage increased from 2.5 V to 4 V. Although the new battery was an improvement over Whittingham’s design, the system still used highly reactive lithium metal as the anode, so companies couldn’t safely manufacture the batteries on a commercial scale.

The final piece of the puzzle fell into place in 1985 when Yoshino, working at the Asahi Kasei Corp, replaced the anode material with graphite. It was stable in the required electrochemical conditions and accommodated many lithium ions in graphite’s crystal structure. With Goodenough’s lithium cobalt oxide cathode and the graphite anode, Yoshino “came up with two materials you could put together without a glove box” in a chemistry laboratory, says Clare Grey, a chemist at the University of Cambridge. Importantly, the graphite anode is lightweight and capable of being recharged hundreds of times before its performance deteriorates. Soon after, Sony teamed up with Asahi Kasei and replaced all the nickel–cadmium batteries in its consumer electronics with lithium-ion ones.

“The story of the lithium-ion battery, like so many stories about innovation, is about contributions from many sources over many years, conditioned by changing economic and social circumstances,” says Matthew Eisler, a historian of science at the University of Strathclyde in Glasgow, UK. When the 1979 oil crisis ended, the automotive industry’s interest in batteries drained, but in 1991 they were commercialized for use in cameras, laptops, smartphones, and other handheld electronics enabled by advancements in microprocessor technology.

To develop transportation that doesn’t rely on fossil fuels, the US Department of Energy in 2013 set an ambitious goal for its Joint Center for Energy Storage Research: Make a battery for electric vehicles that has five times the energy density and is one-fifth the cost of currently available batteries. DOE’s goal hasn’t been reached yet, but the program was renewed in September 2018, with dedicated funding of $120 million over the next five years. In a story on the center, Goodenough told Physics Today (June 2013, page 26), “People are working hard, and I believe the problem is solvable, but to get to the next stage, it’s going to take a little luck and some cleverness.”

Editor’s note: This post was updated at 7:15pm EDT from an earlier summary.

Read me...

Catching up with some reading - drug discovery and synthetic life

Catching up with some reading. Very timely: PhysicsWorld is covering some new developments in high-spec mass spectrometry and drug discovery, while The Economist's front cover is about synthetic biology. Yay!

Soupy twist!

Read me...

The Year in Math and Computer Science

A reblog from Quanta Magazine:


Several mathematicians under the age of 30, and amateur problem-solvers of all ages, made significant contributions to some of the most difficult questions in math and theoretical computer science.

Youth ruled the year in mathematics. The Fields Medals — awarded every four years to the top mathematicians no older than 40 — went out to four individuals who have left their marks all over the mathematical landscape. This year one of the awards went to Peter Scholze, who at 30 became one of the youngest ever to win. But at times in 2018, even 30 could feel old.

Two students, one in graduate school and the other just 18, in two separate discoveries, remapped the borders that separate quantum computers from ordinary classical computation. Another graduate student proved a decades-old conjecture about elliptic curves, a type of object that has fascinated mathematicians for centuries. And amateur mathematicians of all ages rose up to make significant contributions to long-dormant problems.

But perhaps the most significant sign of youth’s rise was when Scholze, not a month after the Fields Medal ceremony, made public (along with a collaborator) his map pointing to a hole in a purported proof of the famous abc conjecture. The proof, put forward six years ago by a mathematical luminary, has baffled most mathematicians ever since.

Read me...

Physics World

Now reading my monthly issue of "Physics World".


An interesting Focus issue on biomedical physics. This article on developing clinical partnerships is a recommended read.

Read me...

A new Bose-Einstein condensate

Originally published here.



Although Bose-Einstein condensation has been observed in several systems, the limits of the phenomenon need to be pushed further: to faster timescales, higher temperatures, and smaller sizes. The easier these condensates become to create, the more exciting routes open up for new technological applications. New light sources, for example, could be extremely small in size and allow fast information processing.

In experiments by Aalto researchers, the condensed particles were mixtures of light and electrons in motion in gold nanorods arranged into a periodic array. Unlike most previous Bose-Einstein condensates created experimentally, the new condensate does not need to be cooled down to temperatures near absolute zero. Because the particles are mostly light, the condensation could be induced at room temperature.

'The gold nanoparticle array is easy to create with modern nanofabrication methods. Near the nanorods, light can be focused into tiny volumes, even below the wavelength of light in vacuum. These features offer interesting prospects for fundamental studies and applications of the new condensate,' says Academy Professor Päivi Törmä.

The main hurdle in acquiring proof of the new kind of condensate is that it comes into being extremely quickly. 'According to our theoretical calculations, the condensate forms in only a picosecond,' says doctoral student Antti Moilanen. 'How could we ever verify the existence of something that only lasts one trillionth of a second?'

Turning distance into time

A key idea was to initiate the condensation process with a kick so that the particles forming the condensate would start to move.

'As the condensate takes form, it will emit light throughout the gold nanorod array. By observing the light, we can monitor how the condensation proceeds in time. This is how we can turn distance into time,' explains staff scientist Tommi Hakala.

The light that the condensate emits is similar to laser light. 'We can alter the distance between each nanorod to control whether Bose-Einstein condensation or the formation of ordinary laser light occurs. The two are closely related phenomena, and being able to distinguish between them is crucial for fundamental research. They also promise different kinds of technological applications,' explains Professor Törmä.

Both lasing and Bose-Einstein condensation provide bright beams, but the coherences of the light they offer have different properties. These, in turn, affect the ways the light can be tuned to meet the requirements of a specific application. The new condensate can produce light pulses that are extremely short and may offer faster speeds for information processing and imaging applications. Academy Professor Törmä has already obtained a Proof of Concept grant from the European Research Council to explore such prospects.

Materials provided by Aalto University. Note: Content may be edited for style and length.

Journal Reference:

  1. Tommi K. Hakala, Antti J. Moilanen, Aaro I. Väkeväinen, Rui Guo, Jani-Petri Martikainen, Konstantinos S. Daskalakis, Heikki T. Rekola, Aleksi Julku, Päivi Törmä. Bose–Einstein condensation in a plasmonic lattice. Nature Physics, 2018; DOI: 10.1038/s41567-018-0109-9

Read me...

New quantum method generates really random numbers

Originally appeared in ScienceDaily, 11 April 2018.


Researchers at the National Institute of Standards and Technology (NIST) have developed a method for generating numbers guaranteed to be random by quantum mechanics. Described in the April 12 issue of Nature, the experimental technique surpasses all previous methods for ensuring the unpredictability of its random numbers and may enhance security and trust in cryptographic systems.

The new NIST method generates digital bits (1s and 0s) with photons, or particles of light, using data generated in an improved version of a landmark 2015 NIST physics experiment. That experiment showed conclusively that what Einstein derided as "spooky action at a distance" is real. In the new work, researchers process the spooky output to certify and quantify the randomness available in the data and generate a string of much more random bits.

Random numbers are used hundreds of billions of times a day to encrypt data in electronic networks. But these numbers are not certifiably random in an absolute sense. That's because they are generated by software formulas or physical devices whose supposedly random output could be undermined by factors such as predictable sources of noise. Running statistical tests can help, but no statistical test on the output alone can absolutely guarantee that the output was unpredictable, especially if an adversary has tampered with the device.

"It's hard to guarantee that a given classical source is really unpredictable," NIST mathematician Peter Bierhorst said. "Our quantum source and protocol is like a fail-safe. We're sure that no one can predict our numbers."

"Something like a coin flip may seem random, but its outcome could be predicted if one could see the exact path of the coin as it tumbles. Quantum randomness, on the other hand, is real randomness. We're very sure we're seeing quantum randomness because only a quantum system could produce these statistical correlations between our measurement choices and outcomes."

The new quantum-based method is part of an ongoing effort to enhance NIST's public randomness beacon, which broadcasts random bits for applications such as secure multiparty computation. The NIST beacon currently relies on commercial sources.

Quantum mechanics provides a superior source of randomness because measurements of some quantum particles (those in a "superposition" of both 0 and 1 at the same time) have fundamentally unpredictable results. Researchers can easily measure a quantum system. But it's hard to prove that measurements are being made of a quantum system and not a classical system in disguise.

In NIST's experiment, that proof comes from observing the spooky quantum correlations between pairs of distant photons while closing the "loopholes" that might otherwise allow non-random bits to appear to be random. For example, the two measurement stations are positioned too far apart to allow hidden communications between them; by the laws of physics any such exchanges would be limited to the speed of light.

Random numbers are generated in two steps. First, the spooky action experiment generates a long string of bits through a "Bell test," in which researchers measure correlations between the properties of the pairs of photons. The timing of the measurements ensures that the correlations cannot be explained by classical processes such as pre-existing conditions or exchanges of information at, or slower than, the speed of light. Statistical tests of the correlations demonstrate that quantum mechanics is at work, and these data allow the researchers to quantify the amount of randomness present in the long string of bits.

That randomness may be spread very thin throughout the long string of bits. For example, nearly every bit might be 0 with only a few being 1. To obtain a short, uniform string with concentrated randomness such that each bit has a 50/50 chance of being 0 or 1, a second step called "extraction" is performed. NIST researchers developed software to process the Bell test data into a shorter string of bits that are nearly uniform; that is, with 0s and 1s equally likely. The full process requires the input of two independent strings of random bits to select measurement settings for the Bell tests and to "seed" the software to help extract the randomness from the original data. NIST researchers used a conventional random number generator to generate these input strings.
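NIST's extractor is a sophisticated, seeded construction, but the basic idea of concentrating thinly spread randomness into uniform bits can be illustrated with the classic von Neumann extractor (a toy sketch, not the NIST protocol): independent but biased input bits are consumed in pairs, and only unequal pairs produce output.

```python
def von_neumann_extract(bits):
    """Turn a biased (but independent) bit stream into unbiased bits.

    Bits are consumed in pairs: (0, 1) -> 0, (1, 0) -> 1, and equal
    pairs are discarded. If every input bit is 1 with some fixed
    probability p, each surviving output bit is 0 or 1 with
    probability exactly 1/2, whatever p was.
    """
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

# A heavily biased stream, mostly 0s like the example in the text:
biased = [0, 0, 0, 1, 0, 0, 1, 0, 0, 0]
print(von_neumann_extract(biased))  # few bits survive, but each is fair
```

The price is the same one the article describes: the output string is much shorter than the input, because the randomness was spread very thin to begin with.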

From 55,110,210 trials of the Bell test, each of which produces two bits, researchers extracted 1,024 bits certified to be uniform to within one trillionth of 1 percent.

"A perfect coin toss would be uniform, and we made 1,024 bits almost perfectly uniform, each extremely close to equally likely to be 0 or 1," Bierhorst said.

Other researchers have previously used Bell tests to generate random numbers, but the NIST method is the first to use a loophole-free Bell test and to process the resulting data through extraction. Extractors and seeds are already used in classical random number generators; in fact, random seeds are essential in computer security and can be used as encryption keys.

In the new NIST method, the final numbers are certified to be random even if the measurement settings and seed are publicly known; the only requirement is that the Bell test experiment be physically isolated from customers and hackers. "The idea is you get something better out (private randomness) than what you put in (public randomness)," Bierhorst said.

Story Source:

Materials provided by the National Institute of Standards and Technology (NIST). Note: Content may be edited for style and length.

Journal Reference:

  1. Peter Bierhorst, Emanuel Knill, Scott Glancy, Yanbao Zhang, Alan Mink, Stephen Jordan, Andrea Rommal, Yi-Kai Liu, Bradley Christensen, Sae Woo Nam, Martin J. Stevens, Lynden K. Shalm. Experimentally Generated Randomness Certified by the Impossibility of Superluminal Signals. Nature, 2018; DOI: 10.1038/s41586-018-0019-0
Read me...

Physicists are planning to build lasers so powerful they could rip apart empty space

Physicists are planning to build lasers so powerful they could rip apart empty space | Science | AAAS

By Edwin Cartlidge


A laser in Shanghai, China, has set power records yet fits on tabletops.


Inside a cramped laboratory in Shanghai, China, physicist Ruxin Li and colleagues are breaking records with the most powerful pulses of light the world has ever seen. At the heart of their laser, called the Shanghai Superintense Ultrafast Laser Facility (SULF), is a single cylinder of titanium-doped sapphire about the width of a Frisbee. After kindling light in the crystal and shunting it through a system of lenses and mirrors, the SULF distills it into pulses of mind-boggling power. In 2016, it achieved an unprecedented 5.3 million billion watts, or petawatts (PW). The lights in Shanghai do not dim each time the laser fires, however. Although the pulses are extraordinarily powerful, they are also infinitesimally brief, lasting less than a trillionth of a second. The researchers are now upgrading their laser and hope to beat their own record by the end of this year with a 10-PW shot, which would pack more than 1000 times the power of all the world's electrical grids combined.

The group's ambitions don't end there. This year, Li and colleagues intend to start building a 100-PW laser known as the Station of Extreme Light (SEL). By 2023, it could be flinging pulses into a chamber 20 meters underground, subjecting targets to extremes of temperature and pressure not normally found on Earth, a boon to astrophysicists and materials scientists alike. The laser could also power demonstrations of a new way to accelerate particles for use in medicine and high-energy physics. But most alluring, Li says, would be showing that light could tear electrons and their antimatter counterparts, positrons, from empty space—a phenomenon known as "breaking the vacuum." It would be a striking illustration that matter and energy are interchangeable, as Albert Einstein's famous E=mc2 equation states. Although nuclear weapons attest to the conversion of matter into immense amounts of heat and light, doing the reverse is not so easy. But Li says the SEL is up to the task. "That would be very exciting," he says. "It would mean you could generate something from nothing."

The Chinese group is "definitely leading the way" to 100 PW, says Philip Bucksbaum, an atomic physicist at Stanford University in Palo Alto, California. But there is plenty of competition. In the next few years, 10-PW devices should switch on in Romania and the Czech Republic as part of Europe's Extreme Light Infrastructure, although the project recently put off its goal of building a 100-PW-scale device. Physicists in Russia have drawn up a design for a 180-PW laser known as the Exawatt Center for Extreme Light Studies (XCELS), while Japanese researchers have put forward proposals for a 30-PW device.

Largely missing from the fray are U.S. scientists, who have fallen behind in the race to high powers, according to a study published last month by a National Academies of Sciences, Engineering, and Medicine group that was chaired by Bucksbaum. The study calls on the Department of Energy to plan for at least one high-power laser facility, and that gives hope to researchers at the University of Rochester in New York, who are developing plans for a 75-PW laser, the Optical Parametric Amplifier Line (OPAL). It would take advantage of beamlines at OMEGA-EP, one of the country's most powerful lasers. "The [Academies] report is encouraging," says Jonathan Zuegel, who heads the OPAL.

Invented in 1960, lasers use an external "pump," such as a flash lamp, to excite electrons within the atoms of a lasing material—usually a gas, crystal, or semiconductor. When one of these excited electrons falls back to its original state it emits a photon, which in turn stimulates another electron to emit a photon, and so on. Unlike the spreading beams of a flashlight, the photons in a laser emerge in a tightly packed stream at specific wavelengths.

Because power equals energy divided by time, there are basically two ways to maximize it: Either boost the energy of your laser, or shorten the duration of its pulses. In the 1970s, researchers at Lawrence Livermore National Laboratory (LLNL) in California focused on the former, boosting laser energy by routing beams through additional lasing crystals made of glass doped with neodymium. Beams above a certain intensity, however, can damage the amplifiers. To avoid this, LLNL had to make the amplifiers ever larger, many tens of centimeters in diameter. But in 1983, Gerard Mourou, now at the École Polytechnique near Paris, and his colleagues made a breakthrough. He realized that a short laser pulse could be stretched in time—thereby making it less intense—by a diffraction grating that spreads the pulse into its component colors. After being safely amplified to higher energies, the light could be recompressed with a second grating. The end result: a more powerful pulse and an intact amplifier.
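Since power equals energy divided by time, petawatt peak powers follow from quite modest pulse energies delivered over femtoseconds. A quick illustrative calculation (the 300 J and 30 fs figures are assumed round numbers, not SULF's actual parameters):

```python
# Peak power = pulse energy / pulse duration.
energy_j = 300.0     # assumed pulse energy in joules (illustrative)
duration_s = 30e-15  # assumed pulse duration: 30 femtoseconds

peak_power_w = energy_j / duration_s
print(f"Peak power: {peak_power_w:.1e} W = {peak_power_w / 1e15:.0f} PW")
```

The total energy here is roughly what a kettle uses in a few minutes; it is the trillionth-of-a-second compression that produces powers exceeding the world's electrical grids.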

Read me...

Now reading: "Creation”

Now reading: "Creation: The Origin of Life" by Adam Rutherford

Read me...

Geek Reading

Some time for a bit of geek reading.

Read me...

LIGO Architects Win Nobel Prize in Physics

Article from Natalie Wolchover - Quanta Magazine

To find the smallest of the small, it pays to dream big. The American physicists Rainer Weiss, Kip Thorne and Barry Barish shared the 2017 Nobel Prize in Physics today for their leading roles in the detection of "gravitational waves" (https://www.quantamagazine.org/gravitational-waves-discovered-at-long-last-20160211/), tiny ripples in space-time set in motion by faraway cataclysms such as the collisions of black holes. The existence of gravitational waves was predicted a century ago by Albert Einstein, who assumed they would be far too weak to ever detect. But Weiss, Thorne, Barish and the late Scottish physicist Ronald Drever spent decades building a hypersensitive experiment that did just that, recording contractions and expansions in the fabric of space-time less than one-thousandth the width of an atomic nucleus.

“It’s really wonderful,” Weiss said after learning of the prize this morning. “But I view this more as a thing that is recognizing the work of about 1,000 people, a dedicated effort that’s been going on for, I hate to tell you, as long as 40 years.”

In the 1960s, Thorne, a black hole expert at the California Institute of Technology who is now 77, came to believe that collisions between the invisible monsters he studied should be detectable as gravitational waves. Meanwhile, across the country at the Massachusetts Institute of Technology (https://www.quantamagazine.org/rainer-weiss-remembering-the-little-room-in-the-plywood-palace-20170615/), Weiss, now 85, came up with the concept for how to detect them. They, along with Drever, founded in 1984 the project that became the Laser Interferometer Gravitational-Wave Observatory (LIGO). More than three decades later, in September 2015, LIGO’s two giant detectors recorded gravitational waves for the first time.

“This was a high-risk, very-high-potential-payoff enterprise,” Thorne told Quanta last year.

After LIGO’s breakthrough success, he and Weiss were seen as shoo-ins to win a physics Nobel. The committee chose to give half of the award to Weiss and split the other half between Thorne and Barish. (Drever, who died in March, was ineligible as the prize is not awarded posthumously, and the gravitational-wave discovery did not make the deadline for consideration last year.)

Barish’s recognition by the Nobel committee was harder to predict. He “was the organizational genius who made this thing go,” Thorne told Quanta. Barish, a Caltech particle physicist who is now 81, replaced the talented but discordant “troika” of Drever, Thorne and Weiss as leader of LIGO in 1994. Barish established the LIGO Scientific Collaboration, which now has more than 1,000 members, and orchestrated the construction of LIGO’s detectors in Louisiana and Washington state.


Left to right: Kip Thorne, Rainer Weiss and Barry Barish.

From left to right: Courtesy of the Caltech Alumni Association; Bryce Vickmark; R. Hahn

Weiss, Thorne and Barish — all now professors emeritus — and their LIGO collaborators have kick-started a new era of astrophysics by tuning in to these tremors in space-time geometry. As they radiate past Earth, gusts of gravitational waves alternately stretch and squeeze the four-kilometer-long arms of LIGO’s detectors by a fraction of an atom’s width. With princess-and-pea sensitivity, laser beams bouncing along both arms of the L-shaped detectors overlap to reveal fleeting differences in the arms’ lengths. By studying the form of a gravitational-wave signal, scientists can extract details about the faraway, long-ago cataclysm that produced it.
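The tiny distances involved follow directly from the definition of gravitational-wave strain, h = ΔL/L, so ΔL = h · L over the detector's arms. The strain and nuclear-width values below are illustrative orders of magnitude, not figures from the article:

```python
# Arm-length change from gravitational-wave strain: dL = h * L.
strain = 1e-21           # typical detected strain amplitude (assumed)
arm_length_m = 4000.0    # LIGO arm length: 4 km
nucleus_width_m = 5e-15  # rough width of an atomic nucleus (assumed)

delta_l = strain * arm_length_m
print(f"Arm-length change: {delta_l:.1e} m, "
      f"or {delta_l / nucleus_width_m:.1e} nuclear widths")
```

With these round numbers the arms move by a few attometers, under a thousandth of a nuclear width, consistent with the sensitivity described above.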

Just last week, for example, LIGO announced its fourth and latest gravitational-wave detection. Its two detectors, along with a new detector in Europe called Virgo, registered the signal from two enormous black holes 1.8 billion light-years away. After circling each other for eons, the pair finally collided, radiating three suns’ worth of energy into space in the form of telltale gravitational waves.

These detections are “opening a new window to the universe,” said Olga Botner, an astrophysicist at Uppsala University in Sweden, during the announcement of the prize this morning. Already, the incoming gravitational-wave signals are telling a new story of the stars (https://www.quantamagazine.org/colliding-black-holes-tell-new-story-of-stars-20160906/) and initiating a new era of astronomy. Future gravitational-wave observatories with even greater sensitivity could test ideas about quantum gravity and, maybe, detect signals from the Big Bang itself.

“That would be one of the most fascinating things man could do, because it would tell you very much how the universe started,” said Weiss shortly after the announcement. “Gravitational waves, because they are so imperturbable — they go through everything — they will tell you the most information you can get about the earliest instants that go on in the universe.”

This article was updated on October 3, 2017, with additional details from the Nobel Prize announcement. It was also corrected to reflect that Rainer Weiss is now 85.

Read me...

Comet 45P Returns

Comet 45P Returns
An old comet has returned to the inner Solar System. Not only is Comet 45P/Honda–Mrkos–Pajdušáková physically ancient, it was first discovered 13 orbits ago in 1948. Comet 45P spends most of its time out near the orbit of Jupiter and last neared the Sun in 2011. Over the past few months, however, Comet 45P's new sunward plummet has brightened it considerably. Two days ago, the comet passed the closest part of its orbit to the Sun. The comet is currently visible with binoculars over the western horizon just after sunset, not far from the much brighter planet Venus. Pictured, Comet 45P was captured last week sporting a long ion tail with impressive structure. Comet 45P will pass relatively close to the Earth early next month.

Read me...

Shell Game in the LMC

Shell Game in the LMC

An alluring sight in southern skies, the Large Magellanic Cloud (LMC) is seen here through narrowband filters. The filters are designed to transmit only light emitted by ionized sulfur, hydrogen, and oxygen atoms. Ionized by energetic starlight, the atoms emit their characteristic light as electrons are recaptured and the atom transitions to a lower energy state. As a result, this false color image of the LMC seems covered with shell-shaped clouds of ionized gas surrounding massive, young stars. Sculpted by the strong stellar winds and ultraviolet radiation, the glowing clouds, dominated by emission from hydrogen, are known as H II (ionized hydrogen) regions. Itself composed of many overlapping shells, the Tarantula Nebula is the large star-forming region at top center. A satellite of our Milky Way Galaxy, the LMC is about 15,000 light-years across and lies a mere 180,000 light-years away in the constellation Dorado.

Read me...

Supermoon over Spanish Castle

Supermoon over Spanish Castle

No, this castle was not built with the Moon attached. To create the spectacular juxtaposition, careful planning and a bit of good weather was needed. Pictured, the last supermoon of 2016 was captured last week rising directly beyond one of the towers of Bellver Castle in Palma de Mallorca on the Balearic Islands of Spain. The supermoon was the last full moon of 2016 and known to some as the Oak Moon. Bellver Castle was built in the early 1300s and has served as a home -- but occasionally as a prison -- to numerous kings and queens. The Moon was built about 4.5 billion years ago, possibly resulting from a great collision between a Mars-sized celestial body and Earth. The next supermoon, defined as when the Moon appears slightly larger and brighter than usual, will occur on 2017 December 3 and be visible not only behind castles but all over the Earth.

via Space http://ift.tt/2hA2eqg

Read me...

Nobel Prize in Physics 2016: Exotic States of Matter

Yesterday the 2016 Nobel Prize in Physics was announced. I immediately got a few tweets asking for more information about what these "exotic" states of matter are and for an explanation of them... Well, in short, the prize was awarded for the theoretical discoveries that help scientists understand unusual properties of materials, such as superconductivity and superfluidity, that arise at low temperatures.

Physics Nobel 2016

The prize was awarded jointly to David J. Thouless of the University of Washington in Seattle, F. Duncan M. Haldane of Princeton University in New Jersey, and J. Michael Kosterlitz of Brown University in Rhode Island. The citation from the Swedish Academy reads: "for theoretical discoveries of topological phase transitions and topological phases of matter."

"Topo...what?" - I hear you cry... well let us start at the beginning...

Thouless, Haldane and Kosterlitz work in a field of physics known as condensed matter physics, which studies the physical properties of "condensed" materials such as solids and liquids. You may not know it, but results from research in condensed matter physics have made it possible for you to save a lot of data on your computer's hard drive: the discovery of giant magnetoresistance made it possible.

The discoveries that the Nobel Committee are highlighting with the prize provide a better understanding of phases of matter such as superconductors, superfluids and thin magnetic films. The discoveries are now guiding the quest for next generation materials for electronics, quantum computing and more. They have developed mathematical models to describe the topological properties of materials in relation to other phenomena such as superconductivity, superfluidity and other peculiar magnetic properties.

Once again that word: "topology"...

So, we know that all matter is formed by atoms. Nonetheless, matter can have different properties and appear in different forms, such as solid, liquid, superfluid, magnet, etc. These various forms of matter are often called states of matter or phases. According to condensed matter physics, the different properties of materials originate from the different ways in which the atoms are organised in the materials. Those different organisations of the atoms (or other particles) are formally called the orders in the materials. Topological order is a type of order in a zero-temperature phase of matter (also known as quantum matter). In general, topology is the study of geometrical properties and spatial relations unaffected by the continuous change of shape or size of figures. In our case, we are talking about properties of matter that remain unchanged when the object is flattened or expanded.

Although research originally focused on topological properties in 1-D and 2-D materials, researchers have since discovered them in 3-D materials as well. These results are particularly important as they enable us to understand "exotic" phenomena such as superconductivity, the property of matter that lets electrons travel through materials with zero resistance, and superfluidity, which lets fluids flow with zero loss of kinetic energy. Currently, one of the most active research topics in the area is the study of topological insulators, superconductors and metals.

Here is a report from Physics Today about the Nobel Prize announcement:

Thouless, Haldane, and Kosterlitz share 2016 Nobel Prize in Physics

David Thouless, Duncan Haldane, and Michael Kosterlitz are to be awarded the 2016 Nobel Prize in Physics for their work on topological phases and phase transitions, the Royal Swedish Academy of Sciences announced on Tuesday. Thouless, of the University of Washington in Seattle, will receive half the 8 million Swedish krona (roughly $925 000) prize; Haldane, of Princeton University, and Kosterlitz, of Brown University, will split the other half.

This year’s laureates used the mathematical branch of topology to make revolutionary contributions to their field of condensed-matter physics. In 1972 Thouless and Kosterlitz identified a phase transition that opened up two-dimensional systems as a playground for observing superconductivity, superfluidity, and other exotic phenomena. A decade later Haldane showed that topology is important in considering the properties of 1D chains of magnetic atoms. Then in the 1980s Thouless and Haldane demonstrated that the unusual behavior exhibited in the quantum Hall effect can emerge without a magnetic field.

From early on it was clear that the laureates’ work would have important implications for condensed-matter theory. Today experimenters are studying 2D superconductors and topological insulators, which are insulating in the bulk yet channel spin-polarized currents on their surfaces without resistance (see Physics Today, January 2010, page 33). The research could lead to improved electronics, robust qubits for quantum computers, and even an improved understanding of the standard model of particle physics.

Vortices and the KT transition

When Thouless and Kosterlitz first collaborated in the early 1970s, the conventional wisdom was that thermal fluctuations in 2D materials precluded the emergence of ordered phases such as superconductivity. The researchers, then at the University of Birmingham in England, dismantled that argument by investigating the interactions within a 2D lattice.

Thouless and Kosterlitz considered an idealized array of spins that is cooled to nearly absolute zero. At first the system lacks enough thermal energy to create defects, which in the model take the form of localized swirling vortices. Raising the temperature spurs the development of tightly bound pairs of oppositely rotating vortices. The coherence of the entire system depends logarithmically on the separation between vortices. As the temperature rises further, more vortex pairs pop up, and the separation between partners grows.

The two scientists’ major insight came when they realized they could model the clockwise and counterclockwise vortices as positive and negative electric charges. The more pairs that form, the more interactions are disturbed by narrowly spaced vortices sitting between widely spaced ones. “Eventually, the whole thing will fly apart and you'll get spontaneous ‘ionization,’ ” Thouless told Physics Today in 2006.

That analog to ionization, in which the coherence suddenly falls off in an exponential rather than logarithmic dependence with distance, is known as the Kosterlitz–Thouless (KT) transition. (The late Russian physicist Vadim Berezinskii made a similar observation in 1970, which led some researchers to add a “B” to the transition name, but the Nobel committee notes that Berezinskii did not theorize the existence of the transition at finite temperature.)
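The textbook way to estimate where this transition sits is an energy-versus-entropy argument: a single free vortex costs energy pi*J*ln(R/a) but gains entropy 2*kB*ln(R/a), so its free energy changes sign at T_KT = pi*J/(2*kB). The sketch below is that standard estimate, not a calculation from the article, and setting J = kB = 1 is an assumption for illustration only.

```python
import math

def single_vortex_free_energy(T, R_over_a, J=1.0, kB=1.0):
    """Free energy F = (pi*J - 2*kB*T) * ln(R/a) of one free vortex.

    R is the system size, a the vortex core size, J the spin coupling.
    F > 0: free vortices are suppressed (they stay in bound pairs).
    F < 0: free vortices proliferate and coherence is destroyed.
    """
    return (math.pi * J - 2 * kB * T) * math.log(R_over_a)

T_KT = math.pi / 2  # transition temperature with J = kB = 1
print(single_vortex_free_energy(0.5 * T_KT, 1e6) > 0)  # below T_KT: True
print(single_vortex_free_energy(1.5 * T_KT, 1e6) > 0)  # above T_KT: False
```

Because both the energy cost and the entropy gain grow as ln(R/a), the balance tips at a sharp temperature, which is why the unbinding happens suddenly rather than gradually.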

Unlike some other phase transitions, such as the onset of ferromagnetism, no symmetry is broken. The sudden shift between order and disorder also demonstrates that superconductivity could indeed subsist in the 2D realm at temperatures below that of the KT transition. Experimenters observed the KT transition in superfluid helium-4 in 1978 and in superconducting thin films in 1981. More recently, the transition was reproduced in a flattened cloud of ultracold rubidium atoms (see Physics Today, August 2006, page 17).

A topological answer for the quantum Hall effect

Thouless then turned his attention to the quantum foundations of conductors and insulators. In 1980 German physicist Klaus von Klitzing had applied a strong magnetic field to a thin conducting film sandwiched between semiconductors. The electrons traveling within the film separated into well-organized opposing lanes of traffic along the edges (see Physics Today, June 1981, page 17). Von Klitzing had discovered the quantum Hall effect, for which he would earn the Nobel five years later.

Crucially, von Klitzing found that adjusting the strength of the magnetic field changed the conductance of his thin film only in fixed steps; the conductance was always an integer multiple of a fixed value, e2/h. That discovery proved the key for Thouless to relate the quantum Hall effect to topology, which is also based on integer steps—objects are often distinguished from each other topologically by the number of holes or nodes they possess, which is always an integer. In 1983 Thouless proposed that the electrons in von Klitzing’s experiment had formed a topological quantum fluid; the electrons’ collective behavior in that fluid, as measured by conductance, must vary in steps.
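A minimal sketch of those fixed steps, using the standard values of the elementary charge and Planck's constant (my own illustration, not from the article):

```python
# Hall conductance plateaus: always an integer multiple of e^2/h.
e = 1.602176634e-19  # elementary charge, in coulombs
h = 6.62607015e-34   # Planck constant, in joule-seconds

for n in range(1, 4):
    print(f"n = {n}: conductance = {n * e**2 / h:.4e} siemens")
```

The unit e^2/h is about 3.874e-5 S, and it is so reproducible that its inverse, the von Klitzing constant h/e^2, is used as a practical resistance standard in metrology.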

Not only did Thouless’s work explain the integer nature of the quantum Hall effect, but it also pointed the way to reproducing the phenomenon’s exotic behavior under less extreme conditions. In 1988 Haldane proposed a means for electrons to form a topological quantum fluid in the absence of a magnetic field. Twenty-five years later, researchers reported such behavior in chromium-doped (Bi,Sb)2Te3, the first observation of what is known as the quantum anomalous Hall effect.

Exploring topological materials

Around 2005, physicists began exploring the possibility of realizing topological insulators, a large family of new topological phases of matter that would exhibit the best of multiple worlds: They would robustly conduct electricity on their edges or surfaces without a magnetic field and as a bonus would divide electron traffic into lanes determined by spin. Since then experimenters have identified topological insulators in two and three dimensions, which may lead to improved electronics. Other physicists have created topological insulators that conduct sound or light, rather than electrons, on their surfaces (see Physics Today, May 2014, page 68).

Haldane’s work in the 1980s on the fractional quantum Hall effect was among the theoretical building blocks for proposals to use topologically protected excitations to build a fault-tolerant quantum computer (see Physics Today, October 2005, page 21). And his 1982 paper on magnetic chains serves as the foundation for efforts to create topologically protected excitations that behave like Majorana fermions, which are their own antiparticle. The work could lead to robust qubits for preserving the coherence of quantum information and perhaps provide particle physicists with clues as to the properties of fundamental Majorana fermions, which may or may not exist in nature.

—Andrew Grant


Read me...

Rosetta's Farewell

After closely following comet 67P/Churyumov-Gerasimenko for 786 days as it rounded the Sun, the Rosetta spacecraft's controlled impact with the comet's surface was confirmed by the loss of signal from the spacecraft on September 30, 2016. One of the images taken during its final descent, this high resolution view looks across the comet's stark landscape. The scene spans just over 600 meters (2,000 feet), captured when Rosetta was about 16 kilometers from the comet's surface. Rosetta's descent to the comet brought to an end the operational phase of an inspirational mission of space exploration. Rosetta deployed a lander to the surface of one of the Solar System's most primordial worlds and witnessed first hand how a comet changes when subject to the increasing intensity of the Sun's radiation. The decision to end the mission on the surface was a result of the comet's orbit taking it to the dim reaches beyond Jupiter, where there would be a lack of power to operate the spacecraft. Mission operators also faced an approaching period where the Sun would be close to the line of sight between Earth and Rosetta, making radio communications increasingly difficult.


Read me...

Visiting Hursley House and The IBM Galileo Centre

Today I was in Hursley and I had the opportunity of spending the day at Hursley House and the IBM Galileo Centre. Very nice grounds and a very inspiring place. Judge for yourselves!

Read me...

Artificial Intelligence - Debunking Myths

Exploring around the interwebs, I came across this article by Rupert Goodwins in Ars Technica about debunking myths about Artificial Intelligence.

HAL 9000 in the film 2001.

It is a good read and if you have a few minutes to spare, do give it a go.

Rupert addresses the following myths:

  1. AI makes machines that can think.
  2. AI will not be bound by human ethics.
  3. AI will get out of control.
  4. Breakthroughs in AI will all happen in sudden jumps.

It is true that there are a number of efforts to try to replicate (and therefore understand) human thought. Examples include the Blue Brain project at the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland. However, this does not mean that we will immediately get a machine such as HAL or C-3PO.

This is because the brain is far more complex than current efforts are able to simulate. As a matter of fact, even much simpler brains remain too complex to simulate in full. This does not mean that we should not try to understand and learn how brains work.

Part of the problem is that it is difficult to even define what we mean by "thought", the so-called hard problem. So a solution to the strong AI problem is not going to arrive soon, but we should definitely keep trying.

So, once that myth is out of the way, the idea that a Terminator-like robot is around the corner is put into perspective. Sure, there are attempts at getting some self-driving cars and such but we are not quite there yet. All in all, it is true that a number of technological advances can be used for good or bad causes, and that is surely something that we all should bear in mind.

Read me...

Einstein's Amazing Theory of Gravity

Earlier this week I attended a talk by Sir Roger Penrose FRS in celebration of the 100th anniversary of the publication of Einstein's General Theory of Relativity. The talk was entitled Einstein's Amazing Theory of Gravity and it was sponsored by the London Mathematical Society (of which I am a proud member) and held at the Science Museum as part of the November Lates events. It also coincided with the 150th anniversary of the LMS!

Einstein General Relativity

Not only were the LMS and the Science Museum commemorating the centenary of the birth of Einstein's Theory of General Relativity; other outlets were too. It may be difficult to put an exact date to Einstein's work, but we know that on November 25th, 1915 Einstein presented the "final" form of his theory to the Prussian Academy of Sciences. You can find a full translation of the paper "The Field Equations of Gravitation" here. It is interesting to note that he refers to a couple of earlier papers in that work, but this is the one that presents the theory in full.

During his talk, Penrose did indeed talk about Relativity, though I would have preferred that he concentrated on the theory per se at a more introductory level; after all, it was a public talk at the Science Museum. He talked about black holes and did not shy away from discussing conformal geometries, for example (bravo!). He finished his talk by presenting some of his own work on aeons and cyclical cosmology. You can get a flavour of what he talked about in this recording of a lecture he gave in 2010.



Read me...

Life lessons from differential equations

Ten life lessons from differential equations:

  1. Some problems simply have no solution.
  2. Some problems have no simple solution.
  3. Some problems have many solutions.
  4. Determining that a solution exists may be half the work of finding it.
  5. Solutions that work well locally may blow up when extended too far.
  6. Boundary conditions are the hard part.
  7. Something that starts out as a good solution may become a very bad solution.
  8. You can fool yourself by constructing a solution where one doesn’t exist.
  9. Expand your possibilities to find a solution, then reduce them to see how good the solution is.
  10. You can sometimes do what sounds impossible by reframing your problem.
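Lesson 5 has a classic one-line example (my own illustration, not part of the original list): the innocuous equation dy/dt = y^2 with y(0) = 1 has the exact solution y(t) = 1/(1 - t), which behaves beautifully near t = 0 yet blows up in finite time at t = 1.

```python
def y_exact(t):
    """Exact solution of dy/dt = y^2, y(0) = 1: fine locally, singular at t = 1."""
    return 1.0 / (1.0 - t)

for t in (0.0, 0.5, 0.9, 0.99):
    print(t, y_exact(t))  # grows without bound as t -> 1
```

A solution that works well locally really can blow up when extended too far.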

Read me...

n sweets in a bag, some are orange...

The other day in the news there was a note about a particular question in one of the national curriculum exams... I thought it was a bit odd for a maths question to feature in the news, so I had a look at the question. Here it is:

There are $latex n$ sweets in a bag.

6 of the sweets are orange.

The rest of the sweets are yellow.

Hannah takes at random a sweet from the bag. She eats the sweet.

Hannah then takes at random another sweet from the bag. She eats the sweet.

The probability that Hannah eats two orange sweets is $latex \frac{1}{3}.$

a) Show that $latex n^2-n-90=0$

It sounds like an odd question, but after giving it a bit of thought it is actually quite straightforward, and I am glad they asked something that makes you think rather than something that is a purely mechanical calculation.

So, let's take a look: Hannah is taking sweets from the bag at random and without replacement (she eats the sweets after all). So we are told that there are 6 orange sweets, so at the beginning of the sweet-eating binge, the probability of picking an orange sweet is:

$latex \displaystyle P(\text{1 orange sweet}) = \frac{6}{n}$.

Hannah eats the sweet, remember... so in the second go at the sweets, the probability of an orange sweet is now:

$latex \displaystyle P(\text{2nd orange sweet}) = \frac{5}{n-1}$.

Now, they tell us that the probability of eating two orange sweets is $latex \frac{1}{3}$, so we have that:

$latex \displaystyle \left( \frac{6}{n} \right)\left( \frac{5}{n-1} \right)=\frac{1}{3}$,

$latex \displaystyle \frac{30}{n^2-n} =\frac{1}{3}$,

$latex \displaystyle n^2-n = 90$,

which is the expression we were looking for. Furthermore, you can then solve this quadratic equation to find that the total number of sweets in the bag is 10.
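If you would rather let a computer double-check the algebra, a short brute-force search with exact fractions (a quick sketch of mine, not part of the exam question) recovers the same answer:

```python
from fractions import Fraction

def p_two_orange(n):
    """Probability of drawing two orange sweets, without replacement,
    from a bag of n sweets of which 6 are orange."""
    return Fraction(6, n) * Fraction(5, n - 1)

# n must exceed 6, since at least one sweet is yellow.
solutions = [n for n in range(7, 101) if p_two_orange(n) == Fraction(1, 3)]
print(solutions)  # [10]
```

Using Fraction keeps the arithmetic exact, so there is no floating-point doubt about the equality with 1/3.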

The only thing we don't know is if the sweets are just orange in colour, or also in flavour! We will have to ask Hannah!

Read me...

Shelf Life - The Tiniest Fossils

Really thrilled to continue seeing the American Museum of Natural History series Shelf Life. I blogged about this series earlier on in the year and they have kept to their word with interesting and unique instalments.

In Episode 6 we get to hear about micropaleontology, the study of fossil specimens so tiny that you cannot see them with the naked eye. The scientists and researchers tell us about foraminifera, unicellular organisms belonging to the kingdom Protista, with a record going back about 65 million years. In spite of being unicellular, they make shells! And this is indeed what makes it possible for them to become fossilised.

Interestingly enough, these fossils can be used to tell us something about ancient climates. As Roberto Moncada pointed out to me:

According to our expert in the piece, basically every representational graph you’ve ever seen of climate/temperatures from the Earth’s past is derived from analyzing these tiny little creatures.

The Tiniest Fossils are indeed among the most important for climate research!

Read me...

2015 - International Year of Light

2015 has been declared the International Year of Light (IYL 2015) and, being an optics geek, it was difficult for me to resist writing a post about it. The IYL 2015 is a global initiative adopted by the United Nations to raise awareness of how optical technologies promote sustainable development and provide solutions to worldwide challenges in areas such as energy, education, communications, health, and sustainability.

There will be a number of events and programmes run throughout the year, and the aim of many of them is to promote public and political understanding of the central role of light in the modern world, while also celebrating noteworthy anniversaries in 2015 - from the first studies of optics 1000 years ago to discoveries in optical communications that power the Internet today.

You can find further information from the well-known OSA here and check out the International Year of Light Blog.

Here are some pictures I took a couple of years ago during CLEO Europe in relation to the International Year of Light.

Read me...

March 2015 Total Solar Eclipse

I know it is a bit late, but with the moving of the blog and all that jazz, I did not have time to post this earlier. This is a video taken by Bob Forrest, a former Specialist Technician at Bayfordbury's Observatory at the University of Hertfordshire. The video is of the Total Solar Eclipse in March 2015.



Read me...

Shelf Life - A great project at the American Museum of Natural History

I am a geek, and proudly so, and as such I have been known to visit exhibitions at the excellent Natural History Museum and the Science Museum in London, the Field Museum in Chicago, or indeed the American Museum of Natural History (AMNH). As a matter of fact, in August I did go to the AMNH and had a great time. I particularly enjoyed the Hayden Planetarium, part of the Rose Center for Earth and Space, with its iconic glass cube encasing the spherical Space Theater.

I am always in awe at the enormous number of items in the collections of these museums, cataloguing human knowledge, from taxonomy and evolution to geology and astrophysics. I was thus really intrigued when Roberto Moncada, from the AMNH, sent me some information about the museum's most recent project: Shelf Life.

The AMNH has a collection with over 33 million specimens and artefacts. As is usually the case, some of these items tell us a story about the state of knowledge at different points in human history, and they range from the rare and irreplaceable to the amazing and precious. With the Shelf Life project, the museum keeps at heart its mission to share its collections and educate the public about the work it does, with the help of videos released monthly over the next year. In Episode 1, they take us inside the museum collections: "from centuries-old specimens to entirely new types of specialized collections like frozen tissues and genomic data". In Episode 2, they talk to us about the art and science of classification, taxonomy, and the way in which 33 million (plus) items get organised in the collection of the museum. Go, have a look at their shelves; you will surely find something of interest among those 33 million items!


Read me...

Mendeley goodies

I got my Mendeley t-shirt, plus a tote bag and stickers. Thanks @mendeley_com.

Mendeley Goodies


Read me...

Chris Hadfield event at the Royal Geographical Society

Chris Hadfield is speaking at the Royal Geographical Society in London as part of the Guardian Live events. I managed to get a couple of great seats to hear him speak about his book "You are here". Looking forward to seeing the images he captured while at the ISS.

Chris Hadfield Royal Geographical Society

Read me...

Black holes, gravity and film - Depicting gravitational lensing in Interstellar

Listening to the Science Magazine podcast, I found out that the depiction of a black hole (or rather of its effects) in Christopher Nolan's latest film, Interstellar, drew on the expertise of physicists to create the visualisations. Furthermore, the researchers used the work done for the film to write an academic paper!

There are a number of things in the film that are not as sound, for instance the contrast between the struggle to free the ship from the embrace of the Earth's gravitational field and the ease of whizzing away from a tidal-wave-ridden planet by simply floating off... But that is not why I wrote this post; it was to highlight the black hole depiction, so back to the subject. In order to better depict the black hole, the film used the expertise of theoretical astrophysicist Kip Thorne, the Feynman Professor of Theoretical Physics at Caltech.

Thorne Diagram

In order to produce the effect of the black hole, Thorne worked together with Double Negative to implement the equations that would render the visual effect. However, no existing rendering software could do the job, since renderers assume that light rays travel in straight lines, which is not the case near a black hole. To show the gravitational lensing around the black hole, a new renderer had to be created, producing images that took over 100 hours each to render. The images gave Thorne unexpected results: they showed that light emitted from the accretion disk around the black hole would be distorted by gravity in such a way that a halo would appear above and below the hole, but also in front of it. So we just have to wait for the papers to be out and read more about this. In the meantime, if you are interested in finding out more about research into black holes, take a look at this page.
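For a feel of the physics without a 100-hour render, the weak-field deflection formula alpha = 4GM/(c^2 b) gives the bending of a light ray passing a mass M at impact parameter b. This is a back-of-the-envelope sketch of ordinary gravitational lensing (my own, and very much not the full geodesic integration the film's renderer performed):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def deflection_arcsec(M, b):
    """Weak-field light deflection alpha = 4*G*M / (c^2 * b), in arcseconds."""
    alpha_rad = 4 * G * M / (c**2 * b)
    return math.degrees(alpha_rad) * 3600

# Classic sanity check: a ray grazing the Sun (b = solar radius) bends
# by about 1.75 arcseconds, the value measured in the 1919 eclipse tests.
print(deflection_arcsec(M_SUN, 6.957e8))
```

Near a black hole the deflection is no longer small, which is exactly why the straight-line assumption of ordinary renderers fails there.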



Read me...

Quantum Tunnel Answers: Turning light into matter

Hello everybody, I am very pleased to have received a question from two people I know very well, Martin del Campo and Gaby R. They have contacted Quantum Tunnel with the following question:

Breit and Wheeler proposed in 1934 that it should be possible to turn light into matter by smashing together two particles of light to create an electron and a positron – the simplest method of turning light into matter ever predicted. I would like to know if some day it could be possible to demonstrate this idea.

Well Gaby and Martin, thanks a lot for the question, and I bet it was triggered by the mention of Breit and Wheeler in recent weeks. As you rightly mention, Gregory Breit and John Wheeler proposed the mechanism in a Physical Review paper entitled "Collision of Two Light Quanta" (Phys. Rev. 46, 1087; 1934). Although the mechanism was proposed about 80 years ago, the difficulty of preparing the gamma rays to be collided has been a great obstacle. Back in 1997, researchers at the Stanford Linear Accelerator Center managed to carry out a multi-photon Breit-Wheeler process: they created electrons and positrons in a multi-step scheme, using high-energy photons that were themselves generated with electrons. So the idea was to a certain extent demonstrated back in 1997, but the production of an electron-positron pair in one single shot is yet to be seen.

The difficulties have not deterred persistent physicists, and I am pleased to tell you that efforts continue. In particular, the researchers Pike, Mackenroth, Hill and Rose, from Imperial College London, published a paper in Nature Photonics called "A photon-photon collider in a vacuum hohlraum" (Nat. Phot. 8, 434; 2014). By the way, a hohlraum is a cavity whose walls are in radiative equilibrium with the radiant energy within it; the word comes from German and means "cavity" or "hollow room". In their paper they present the design of a novel photon–photon collider in which a gamma-ray beam is fired into a hohlraum. The theoretical simulations in the paper suggest that this setup could produce about $latex 10^5$ Breit–Wheeler pairs in a single shot. If that is the case, the setup could provide the trail to a first realisation of a photon–photon collider and demonstrate the mechanism that Breit and Wheeler talked about.
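As a rough kinematic aside (my own back-of-the-envelope, not a result from either paper): two head-on photons can only produce an electron-positron pair if the product of their energies reaches the square of the electron rest energy, which shows why at least one of the beams has to be very energetic.

```python
# Threshold for two-photon (Breit-Wheeler) pair production, head-on:
#   E1 * E2 >= (m_e c^2)^2, with m_e c^2 = 0.511 MeV.
ME_C2_MEV = 0.511  # electron rest energy in MeV

def partner_threshold_mev(E1_mev):
    """Minimum energy of the second head-on photon for pair creation."""
    return ME_C2_MEV**2 / E1_mev

# A 1 GeV gamma ray only needs a partner photon of a few hundred eV:
print(partner_threshold_mev(1000.0) * 1e6, "eV")
```

This is why pairing a gamma-ray beam with the comparatively soft X-ray photons filling a hohlraum is such an attractive scheme.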

Read me...

Quantum Tunnel Answers: Fresnel Lens

Hello everyone,

once again we have a question in the inbox of the Quantum Tunnel blog. If you are interested in asking a question, please feel free to get in touch using this page. This one comes from a very avid reader; let's take a look:

Dear Quantum Tunnel,

Could you please explain how Fresnel lenses work? I am asking after listening to Dr Carlos Macías-Romero talking in one of the Quantum Tunnel podcasts. Thanks a lot.


Hello yet again Pablo, thanks a lot for your question. Well, I assume that you are familiar with the idea of a lens, and that you may even wear a pair of spectacles or know someone who does, so you know that lenses can correct, among other things, the focal point of your eyes and thus let you read your favourite blog (the Quantum Tunnel site, of course!) without trouble.

Well, have you ever had a chance to see a lighthouse close enough? Not just the building, but the actual place where the light is beamed out to sea? If so, you may have seen the lenses they use. If not, take a look at the image here:

Lighthouse Lens (Photo credit: Wikipedia)

You can see how the lens is made out of various concentric layers of material; the design allows us to construct lenses that would otherwise be far too thick and therefore heavy. A lighthouse requires a light beam with a large aperture but a short focal length, and a Fresnel lens offers exactly that without the need for a really thick lens. Fresnel lenses are named after the French physicist Augustin-Jean Fresnel.

Another example of a Fresnel lens is the flat magnifying glass, such as the one shown below; you can see that it is effectively flat, with no need for the kind of lens used by Sherlock Holmes...

Credit-card-size Fresnel magnifier (Photo credit: Wikipedia)

The design of a Fresnel lens allows it to capture more oblique light from a light source. Remember that a lens works by refracting (bending) light, and the "layering" in the Fresnel lens provides the refraction needed while using far less material. See the diagram below:

Fresnel lens

A couple of other uses for these lenses are in overhead projectors and the headlights of cars. So next time you attend or give a lecture or drive at night, think of Monsieur Fresnel.
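Each ridge of a Fresnel lens bends light by ordinary refraction, just as a prism does, so a minimal Snell's-law sketch captures the basic mechanism (the refractive index n = 1.5 below is an assumed typical value for glass, not a figure for any particular lens):

```python
import math

def refraction_angle(theta_in_deg, n1=1.0, n2=1.5):
    """Snell's law n1*sin(theta1) = n2*sin(theta2); angles from the normal."""
    s = n1 * math.sin(math.radians(theta_in_deg)) / n2
    return math.degrees(math.asin(s))

# A ray hitting glass at 30 degrees from the normal bends towards it:
print(refraction_angle(30.0))  # about 19.5 degrees
```

A conventional lens achieves its bending through the curvature of a thick slab of glass; the Fresnel design keeps only the curved surfaces, collapsed into thin concentric ridges, which is why it can be so flat.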

Read me...

Quantum Tunnel Answers - Interest in Quantum Physics

This time it is not really a question that has arrived in the Quantum Tunnel mailbox, but rather an observation and some cheers. Let's take a look:

Dear Quantum Tunnel,

I have listened to all the available Quantum Tunnel podcasts in Spanish; the content is great and the news items are cool. I am interested in understanding more about quantum theory, and in my experience there is not a lot of information at my level that does not make it all sound like philosophy, or even like a bad example. In most cases the explanations start by assuming that one already understands the "quantum concepts". With those limitations, I am afraid to admit that I actually fail to see the genius of Einstein. Having said that, I refuse to think that I am unable to understand ideas that are taught in universities. Surely some explanations do not start with "time is relative". If thousands can understand it, so can I.

Pablo Mitlanian

Hello again Pablo, I agree with you that there is a lot of information out there that either assumes too much, or simply exploits the concepts for non-scientific purposes. You are right, I am sure you can understand the intricacies of quantum-mechanical phenomena, but bear in mind the words of Richard Feynman "I think I can safely say that nobody understands quantum mechanics".  I would not expect someone to become a quantum physicist without the appropriate training, in the same way we cannot all perform a heart transplant without studying medicine and practicing. That doesn't mean we can't change careers though!

If you want to learn quantum theory in ten minutes, take a look at the blog post that the Quantum Pontiff blog posted a few years back. Yes, there are ducks and turkeys, but then again they promised to explain in 10 minutes. There are nonetheless a few things that can serve as building blocks to achieve your goal:

  1. Learn about classical physics (yes, the courses on mechanics that you probably took in high school, exactly those). A good understanding of this will highlight those non-intuitive results from the quantum world.
  2. Understand how to describe the behaviour of particles and of waves (I guess this is part of number 1 above, so I am just stressing the point!).
  3. Make sure you are well versed in the use of probability (yes, I am saying that you need to revise some mathematics!)
  4. Be patient!

If all that works, perhaps consider enrolling at your local university to read physics; you never know, you might make the next discovery in physics. Incidentally, as part of your revision make sure you understand that relativity theory (general or special) is completely decoupled from quantum theory. As a matter of fact, joining the two is one of the biggest challenges in physics today.

If you want to ask a question to Quantum Tunnel use the form here.

Read me...

Quantum Tunnel answers: Solar flares

I am very pleased that the first (of many, I hope) questions has arrived in the mailbox of the Quantum Tunnel blog. So here we go:

Dear Quantum Tunnel:

Is it true that there will be a solar storm in December (2014) and there will be three days of darkness? If so, why is this happening?

Yours sincerely,

Pablo Mitlanian

Well, thanks a lot for your question, Pablo. Let me start by clearing the air and responding directly to the question: no, it is not true that there will be a solar storm that will cause three days of darkness. So there you go! I think this is a rumour that has been going around the interwebs for quite some time. Neither NASA nor any other respected scientific institution has made such a claim.

Now, let us address the actual facts related to the question: solar storms do indeed exist, and the term usually refers to a sudden release of energy from the surface of the Sun; we are talking about $latex 6\times 10^{25}$ joules. To put this in perspective, the Chicxulub impact (in Mexico), linked to the mass extinction of the dinosaurs, released around $latex 4\times 10^{23}$ joules. Solar flares are sometimes followed by the ejection of plasma from the upper atmosphere of the Sun (a coronal mass ejection) and accompanying magnetic fields. The charged particles in this ejected material (electrons, ions and atoms) reach the Earth one or two days after the event. Incidentally, charged particles hitting the magnetosphere are the reason for beautiful auroras!
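For yet another way of grasping such an energy, here is a quick conversion of a large flare's energy into tons of TNT, using the standard equivalence 1 ton of TNT = 4.184e9 joules (the sketch is mine, for illustration only):

```python
FLARE_ENERGY_J = 6e25      # energy of a large solar flare, joules
J_PER_TON_TNT = 4.184e9    # standard TNT equivalence

tons_tnt = FLARE_ENERGY_J / J_PER_TON_TNT
print(f"{tons_tnt:.2e} tons of TNT")  # on the order of 1e16 tons
```

That is roughly ten million billion tons of TNT, released in a matter of minutes.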

Magnificent CME Erupts on the Sun - August 31


As you can imagine, solar flares have a definite impact on local space weather, and thus on the Earth too. The particles from the solar wind can hit the Earth's magnetosphere, presenting a hazard to spacecraft and satellites and, in some cases, affecting terrestrial electric power grids. One of the most powerful solar flares ever observed was recorded in 1859 by Richard Carrington and, independently, by Richard Hodgson, and the resulting auroras could be seen even in Cuba and Hawaii!

The Sun's magnetic activity has been observed to follow a periodic cycle of about 11 years, and at a maximum there are more solar flares. The last maximum was in 2000, so we were expecting another around 2011; but as with other weather predictions (terrestrial or not), there is a margin of error. So I am sure you can go about your end-of-year celebrations without worrying about solar flares, and who knows, you may even have a chance to see a charming aurora!



If you want to ask a question to Quantum Tunnel use the form here.

Related Articles

'Extreme solar storm' could have pulled the plug on Earth - The Guardian

Read me...

Science is a creative process

I read this article in Newsweek and had to share it with you. Go and read it! Here is a brief extract:

I wanted to get things in perspective: If law students had to spend five or six years in school, think up a novel law and get it passed, then their training would resemble that of a biology Ph.D. If a med student had to invent and test a new treatment for patients - and prove it successful - before being awarded an M.D., ditto. If my students remember nothing else, I'd be happy if they leave with the idea that, just like art or music, science is a creative process.

Biology PhD




Read me...

Stochastic Calculus and Differential Equations for Physics and Finance

Review of Stochastic Calculus and Differential Equations for Physics and Finance, by Joseph L. McCauley

Download a free copy of the review here.

Stochastic Calculus

Read me...

Nature Materials: Focus Editorial Mexico

Back in October 2010, Nature Materials published a Focus Editorial on Mexico to which I contributed with Joerg Heber. You can find information about it here:

Mexico is a country rich with natural resources and an educated workforce. Yet its scientific output remains below its potential. In a focus issue we highlight some of Mexico's structural problems.

Read me...

Magnetic Resonance Imaging (MRI)


my brains - let me show you them

A few days ago I received a request from a reader (thanks Paulo Zan) for information about magnetic resonance. The request did not specify more than that, so I took the liberty of deciding that it probably arose from having come across the term before, and as such it is quite possible that a lot of us have heard of magnetic resonance in the context of MRI scans.

Well, MRI stands for "Magnetic Resonance Imaging", and an MRI scan is a widely-used technique for obtaining images of the body, notably the brain. The full name of the imaging technique is actually nuclear magnetic resonance imaging, but it seems that the first word in that mouthful is sometimes avoided, as it may have negative connotations for some. Other names include MRT, or magnetic resonance tomography. MRI scanners use strong magnetic fields and radio waves to form images of the body.

As I mentioned above, the full name should include the word nuclear because the physical phenomenon exploited by the scanner is the absorption and emission of electromagnetic radiation by atomic nuclei in a strong magnetic field. The absorption and emission of energy is related to the frequency of the radiation in question and, depending on the properties of the atoms, certain frequencies cause larger oscillations. Those frequencies are called resonance frequencies. An important feature of the phenomenon is that the resonance frequency of a particular substance is directly proportional to the strength of the applied magnetic field. In an MRI scanner it is this property that enables the imaging: if a sample is subjected to a non-uniform magnetic field, then the resonance frequencies of the nuclei that make up the sample depend on where in the field they are located. The resolution of the images obtained depends on the magnitude of the magnetic field gradient, and thus strong fields are required. In a lot of scanners the detectors pick up radio signals emitted by excited hydrogen atoms in the body (remember that water is two parts hydrogen and one part oxygen). Because of the large magnets in the machines, patients are required not to carry metallic objects while in the same room as the scanner.
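To put an illustrative number on that proportionality, here is a tiny sketch. The gyromagnetic constant below is the standard textbook value for hydrogen-1 nuclei (not something taken from this post), and the field strengths are typical clinical values I have chosen for illustration:

```python
# Larmor relation: the resonance frequency is proportional to the field,
# f = (gamma / 2 pi) * B, with gamma/2pi ~ 42.58 MHz per tesla for hydrogen-1.
gamma_over_2pi = 42.58e6      # Hz per tesla (hydrogen-1 gyromagnetic ratio)
for B in (1.5, 3.0):          # typical clinical scanner field strengths (tesla)
    print(f"{B} T -> {gamma_over_2pi * B / 1e6:.2f} MHz")
```

Doubling the field doubles the resonance frequency, which is exactly what a non-uniform field exploits: each location in the sample "broadcasts" at a frequency that encodes where it is.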

Read me...

Magnetic Resonance Imaging (MRI)

Magnetic Resonance Imaging scan of a head.

A few days ago I received a request from a reader (thanks Paulo Zan) for information about magnetic resonance. The request did not specify more than that, so I took the liberty of deciding that it probably arose from having seen the term before, and as such it is quite possible that many of us have heard of magnetic resonance in the context of MRI scanners.

Well, MRI stands for Magnetic Resonance Imaging scan, and it is a widely-used technique for obtaining images of the brain. The full name of the technique is actually "nuclear magnetic resonance imaging", but it seems that that word in the mouthful is sometimes avoided, as it may have negative connotations for some. Other names include MRT, or magnetic resonance tomography. MRI scanners use strong magnetic fields and radio waves to form images of the body.

As I mentioned above, the full name should include the word nuclear because the physical phenomenon exploited by the scanner is the absorption and emission of electromagnetic radiation by atomic nuclei in a strong magnetic field. The absorption and emission of energy is related to the frequency of the radiation in question and, depending on the properties of the atoms, certain frequencies cause larger oscillations. We call these frequencies resonance frequencies. An important feature of the phenomenon is that the resonance frequency of a particular substance is directly proportional to the strength of the applied magnetic field. In an MRI scanner it is this property that enables the imaging: if a sample is subjected to a non-uniform magnetic field, the resonance frequencies of the nuclei that make up the sample depend on where within the field they are located. The resolution of the images obtained depends on the magnitude of the magnetic field gradient, and thus strong fields are required. In most scanners the detectors pick up radio signals emitted by excited hydrogen atoms in the body (remember that water is two parts hydrogen and one part oxygen). Because of the large magnets in the scanners, patients are required not to carry metallic objects.

Read me...

Science is beautiful exhibition

When I first heard about the British Library's plans for an exhibition called Science is Beautiful I got very excited. I even made an entry in my diary for the date it was due to open, and closer to the time I encouraged Twitter followers and colleagues to go.

Florence Nightingale's "rose diagram", showing the Causes of Mortality in the Army in the East, 1858. Photograph: British Library

The exhibition promised to explore how "our understanding of ourselves and our planet has evolved alongside our ability to represent, graph and map the mass data of the time." So I finally made some time and made it to the British Library today... the exhibition was indeed there, with some nice-looking maps and graphics, but I could not help feeling utterly disappointed. I was surprised they even called this an exhibition: the few images, documents and interactive displays were sparse and not very immersive. Probably my favourite part was looking at "The Pedigree of Man" and "Nightingale's Rose" together with an interactive show. Nonetheless, I felt that the British Library could have done a much better job given the wealth of documents they surely have at hand. Besides, the technology used to support the exhibits was not that great... for example, the touch screens were not very responsive and did not add much to the presentation.

Sadly I can no longer really recommend visiting the stands; I feel you are better off looking at the images the Guardian has put together in their DataBlog, complemented with the video that Nature has made available. You can also read the review that Rebekah Higgitt wrote for the Guardian.


Read me...

Anti-atom beam

It may sound like a line from Star Trek, but I can assure you that the creation of a beam made out of anti-hydrogen atoms is a real achievement carried out by scientists at CERN.

The work was reported in Nature Communications, and it could hopefully help answer the question of the patent lack of anti-matter we see in everyday life. In order to study anti-matter we need a source of anti-atoms, and the anti-particles must live long enough for useful measurements to be made.

It is not that anti-matter is not currently used: PET scans routinely employ positrons to take snapshots of patients' bodies. But the prospect of having proper anti-matter atoms became a reality only about three years ago. Now scientists from the ASACUSA collaboration at CERN report the creation of a beam of anti-hydrogen atoms that can be measured more precisely outside the magnetic trap where they were created. At least 80 of the anti-atoms were detected 2.7 metres (9 feet) downstream of the production region.

Read me...

Essential MATLAB and Octave

As some of you probably know, I am currently writing a book about MATLAB and Octave aimed at newcomers to both programming and the MATLAB/Octave environments. The book is tentatively entitled "Essential MATLAB and Octave" and I am getting closer and closer to finishing the text. The next step is preparing exercises and finalising things. My publisher, CRC Press, has been great and I hope the book does well.

I'm aiming to finish by May and in principle the book will be available from November or so. The whole process does take a while, but I am really looking forward to seeing the finished thing out there.

So, what triggered this post? Well, I have seen a site appear with the book announced. I am not sure whether this is usual practice, but in any case it is a good thing, don't you think?




Read me...

Harvesting magnetic fields...

A few days ago I got a message from my mate Jorge Soto... always great to hear from him, particularly with New Year wishes and even better with an interesting question:

Jorge Soto

The question relates to the conversion of magnetic energy into electrical energy, and whether the process can be achieved in places such as the Van Allen radiation belt.

So, let us take this in parts; first, the magnetic-to-electric energy conversion. Well, according to the first law of thermodynamics energy cannot be "created or destroyed", but we can indeed convert it from one form to another. It turns out that we can use some kinetic energy to move, say, a magnet. In turn this kinetic energy can be converted to electrical energy thanks to the properties of electromagnetism, in particular the so-called Faraday's law. Faraday discovered that, when moving a permanent magnet into and out of a coil of wire, an electrical current was induced in the wire while the magnet was in motion.

Now, to the Van Allen radiation belt: the belt is part of the Earth's magnetosphere. OK, OK... the magnetosphere is the region of space near a celestial object in which charged particles are controlled by the magnetic field generated by the object itself. The Van Allen belts extend from an altitude of about 1,000 to 60,000 kilometres above the surface, a region in which radiation levels vary. In order to convert magnetic energy to electrical energy, as mentioned above, we require the magnetic field to move or vary. It is generally accepted that in this context the Earth is effectively a permanent magnet, and thus to generate electric power from it you have to move electric conductors (wires) through the field in the right direction and with the right orientation of the conductor. Not an easy task...

However, one can perhaps take advantage of the variations of the magnetic field. In Nature 439, 799-801 (16 February 2006) it has been reported that

"... Earth's magnetic field is weak: it varies from about 25 microtesla (μT) at the Equator to 75 μT at the poles, with geomagnetic field lines inclined, in Europe and North America, at an angle of about 60° to the (horizontal) surface. The field is not constant: currents in the ionosphere and disturbances from Earth's interior produce slow daily variations in the field with amplitudes of some 25 nanotesla (nT), and superimposed on these are further oscillations with periods of a few seconds and amplitudes of about 1 nT."

Using the very crude approximation that the field varies by 1 nT per second, and taking a circular loop with a radius of 1 metre, we would end up with a voltage of $latex \pi\times 10^{-9}$ Volts, or approximately 3.14 nanovolts. In other terms, we would get about 3 billionths of a volt from a loop a few square metres in area... probably not a lot of usable energy, and thus maybe not that cost effective.
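The estimate above is easy to reproduce; a minimal sketch of the arithmetic (the 1 nT/s and 1 m figures are the rough assumptions used in the text):

```python
import math

# Sketch of the back-of-envelope estimate: Faraday's law gives
# |EMF| = dPhi/dt = (dB/dt) * area for a uniform field through a fixed loop.
dB_dt = 1e-9                       # assumed field variation: 1 nT per second
radius = 1.0                       # loop radius in metres
area = math.pi * radius ** 2       # loop area, ~3.14 m^2
emf = dB_dt * area                 # induced voltage in volts
print(f"induced EMF ~ {emf:.2e} V")   # ~3.14e-09 V, a few nanovolts
```

Even a loop thousands of times larger would harvest only microvolts this way, which is why the idea is not cost effective.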

Read me...

Wishful thinking or The Misuse of Maths in Psychology

The misuse of maths in psychology

"Think positively!" - a seemingly innocuous remark you might hear every so often… you might have even read it in one of those self-help books, or even from renowned psychologists of “positivity” such as Barbara Fredrickson. In 2005, Fredrickson and her colleague Marcial Losada published a paper in “American Psychologist” in which they calculate a “positivity ratio” using Lorenz equations.

In the paper, the authors mention that positivity ratios above 2.9013 are related to "flourishing mental health". It turns out that this paper has recently been refuted and even partially withdrawn thanks to the judicious eye of Nicholas Brown, a part-time graduate student at the University of East London, who was able to see through the great misuse of mathematics. Brown was supported by Alan Sokal, an outspoken critic of postmodernism and professor of physics at New York University, and Harris L Friedman, a clinical psychologist at Saybrook University and the University of Florida. Their paper is entitled "The Complex Dynamics of Wishful Thinking: The Critical Positivity Ratio."
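For reference, the Lorenz equations in question are a system of three coupled differential equations originally devised for atmospheric convection; a minimal forward-Euler sketch (the classic parameter values below are illustrative choices of mine, not the ones used in the positivity paper):

```python
import math

# Lorenz system: dx/dt = s(y - x), dy/dt = x(r - z) - y, dz/dt = x y - b z
def lorenz_step(x, y, z, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    """One crude forward-Euler step of the Lorenz equations."""
    return (x + dt * s * (y - x),
            y + dt * (x * (r - z) - y),
            z + dt * (x * y - b * z))

x, y, z = 1.0, 1.0, 1.0
for _ in range(2000):              # integrate for a while; motion stays bounded
    x, y, z = lorenz_step(x, y, z)
print(all(math.isfinite(v) for v in (x, y, z)))
```

The parameters s, r and b are physical constants of a fluid-convection model, which is precisely why fitting them to questionnaire data about emotions was such a stretch.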

The Observer newspaper mentions that Fredrickson and Losada were given the opportunity to respond to the rebuttal… only Fredrickson took the opportunity up. According to the Observer:

“She effectively accepted that Losada's maths was wrong and admitted that she never really understood it anyway. But she refused to accept that the rest of the research was flawed.”

I guess it is still the positive thinking that may be helping her…

It is great to see that the scientific process does work; unfortunately, I am sure that the "positivity ratio" pushers will continue to exploit the situation.

Read me...

IoP Talk

IoP Talk announced:
Theory, model and simulation: An involved association
Jan 29th 2014, 19.00-20.00
iop talk



Read me...

Google Doodle - Carlos Juan Finlay

Born on December 3rd 180 years ago, Carlos Juan Finlay is the man who came up with the theory that yellow fever was spread by mosquitoes. Glad to see that a Google Doodle can help with letting people know about this important Cuban scientist.

Finlay's research on cholera and yellow fever didn't initially get much support. He suggested that yellow fever was carried by mosquitoes and that cholera was waterborne. His work was later confirmed by the Walter Reed Commission, and in 1902 Finlay became the chief health officer of Cuba. The confirmation paved the way for the eradication of yellow fever, creating the chance to save thousands of lives.



carlos juan finlay

Read me...

The Way of All Flesh

The Way of All Flesh: a one-hour BBC documentary on Henrietta Lacks and HeLa, directed by Adam Curtis. It won the Best Science and Nature Documentary award at the San Francisco International Film Festival. Immediately following the film's airing in 1997, an article on HeLa cells, Lacks and her family was published by reporter Jacques Kelly in The Baltimore Sun.




Read me...

What Science's "Sting Operation" reveals - reblog

This is a re-blog of "What Science's "Sting Operation" Reveals" by Kausik Datta in Scilogs.

What Science’s “Sting Operation” Reveals: Open Access Fiasco or Peer Review Hellhole?

4 October 2013 by Kausik Datta,

The science-associated blogosphere and Twitterverse were abuzz today with the news of a Gotcha! story published in today's Science, the premier science publication from the American Association for the Advancement of Science. Reporter John Bohannon, working for Science, fabricated a completely fictitious research paper detailing the purported "anti-cancer properties of a substance extracted from a lichen", and submitted it under an assumed name to no fewer than 304 Open Access journals all over the world, over the course of 10 months. He notes:

... it should have been promptly rejected. Any reviewer with more than a high-school knowledge of chemistry and the ability to understand a basic data plot should have spotted the paper's shortcomings immediately. Its experiments are so hopelessly flawed that the results are meaningless.

Nevertheless, 157 journals, out of the 255 that provided a decision to the author's nom de guerre, accepted the paper. As Bohannon indicates:

Acceptance was the norm, not the exception. The paper was accepted by journals hosted by industry titans Sage and Elsevier (Note: Bohannon also mentions Wolters Kluwer in the report). The paper was accepted by journals published by prestigious academic institutions such as Kobe University in Japan. It was accepted by scholarly society journals. It was even accepted by journals for which the paper's topic was utterly inappropriate, such as the Journal of Experimental & Clinical Assisted Reproduction.

This operation, termed a 'sting' in Bohannon's story, ostensibly tested the weaknesses, especially the poor quality control exercised, of the Peer Review system in the Open Access publishing process. Bohannon chose only those journals which adhered to the standard Open Access model: the author pays if the paper is published. When a journal accepted either the original or a superficially revised version (retaining all the fatal flaws), Bohannon sent an email requesting to withdraw the paper, citing a 'serious flaw' in the experiment which 'invalidates the conclusion'. Bohannon notes that about 60% of the final decisions appeared to have been made with no apparent sign of any peer review; that the acceptance rate was 70% after review, with only 12% of reviews identifying any scientific flaws - and about half of those papers were nevertheless accepted by editorial discretion despite bad reviews.

As noted by some scientists and Open Access publishers like Hindawi, whose journals rejected the submission, the poor quality control evinced by this sting is not directly attributable to the Open Access model. A scientific journal that doesn't perform peer review, or does a shoddy job of it, is critically detrimental to the overall ethos of scientific publishing and actively undermines the process and credibility of scientific research and the communication of the observations thereof, regardless of whether the journal is Open Access or Pay-for-Play.

And that is one of the major criticisms of this report. Wrote Michael B Eisen, UC Berkeley Professor and co-founder of the Public Library of Science (PLoS; incidentally, the premier Open Access journal PLOS One was one of the few to flag the ethical flaws in, as well as reject, the submission) in his blog today:

... it’s nuts to construe this as a problem unique to open access publishing, if for no other reason than the study didn’t do the control of submitting the same paper to subscription-based publishers [...] We obviously don’t know what subscription journals would have done with this paper, but there is every reason to believe that a large number of them would also have accepted the paper [...] Like OA journals, a lot of subscription-based journals have businesses based on accepting lots of papers with little regard to their importance or even validity...

I agree. This report cannot support any kind of comparison between Open Access and subscription-based journals. The shock-and-horror comes only if one places Open Access journals a priori on a hallowed pedestal for no good reason. For me, one aspect of the deplorable picture revealed stood out in particular - the question: are all Open Access journals created equal? The answer would seem to be an obvious 'No', especially given the outcome of this sting. But then it begs the follow-up question: if this had indeed been a serious and genuine paper, would the author (in this case, Bohannon) have sought out obscure OA journals to publish it?

As I commented on Prof. Eisen's blog, rather than criticizing the Open Access model, the most obvious solution to ameliorate this kind of situation seems to be to institute a measure of quality assessment for Open Access journals. I am not an expert in the publishing business, but surely some kind of reasonable and workable metric can be worked out in the same way Thomson Reuters did all those years ago for Pay-for-Play journals? Dr. Eva Amsen of the Faculty of 1000 (and an erstwhile blog colleague at Nature Blogs) pointed out in reply that a simple solution would be to quality control for peer review via an Open Peer Review process. She wrote:

... This same issue of Science features an interview with Vitek Tracz, about F1000Research’s open peer review system. We include all peer reviewer names and their comments with all papers, so you can see exactly who looked at a paper and what they said.

Prof. Eisen, a passionate proponent of the Open Access system and someone who has been trying for a long time to reform the scientific publishing industry from within, agrees that more than a "repudiation [of the Open Access model] for enabling fraud", what this report reveals is the disturbing lesson that the Peer Review system, as it currently exists, is broken. He wrote:

... the lesson people should take home from this story not that open access is bad, but that peer review is a joke. If a nakedly bogus paper is able to get through journals that actually peer reviewed it, think about how many legitimate, but deeply flawed, papers must also get through. [...] there has been a lot of smoke lately about the “reproducibility” problem in biomedical science, in which people have found that a majority of published papers report facts that turn out not to be true. This all adds up to showing that peer review simply doesn’t work. [...] There are deep problems with science publishing. But the way to fix this is not to curtain open access publishing. It is to fix peer review.

I couldn't agree more. Even those who swear by peer review must acknowledge that the peer review system, as it exists now, is not a magic wand that can separate the grain from the chaff by a simple touch. I mean, look at the thriving Elsevier Journal Homeopathy, allegedly peer reviewed... Has that ever stemmed the bilge it churns out on a regular basis?

But the other question that really, really bothers me is more fundamental. As Bohannon notes, "about one-third of the journals targeted in this sting are based in India — overtly or as revealed by the location of editors and bank accounts — making it the world's largest base for open-access publishing; and among the India-based journals in my sample, 64 accepted the fatally flawed paper and only 15 rejected it."

Yikes! How and when did India become this haven for dubious, low quality Open-Access publishing? (For the context, see this interactive map of the sting.)

Read me...

Hawthorne Effect

School meals
School meals (Photo credit: Coventry City Council)

I was listening last week to the "More or Less" podcast with Tim Harford, which by the way is one of my favourite Radio 4 programmes and one I highly recommend. In the programme they discussed the proposal by Mr Nick Clegg, the UK's Deputy Prime Minister, to offer free school lunches to all pupils at infant schools. The proposal follows from a pilot study that seemed to suggest that giving free meals to school children was good for their academic performance.

As usual, not all is what it seems, and the programme goes on to discuss this. I'm afraid it is the old problem of correlation and causation... In any case, the commentators in the programme made a reference to the Hawthorne effect, and although Tim Harford said something about it, I was left curious to find out more. It turns out that the Hawthorne effect is at work when subjects modify and change their behaviour in response to the fact that they know they are being studied. You might think that this is similar to the quantum mechanical observer affecting the system they observe, except that in this case the system is patently aware of the influence of the observation. I would leave it at that...

The effect is named after Western Electric's Hawthorne Works in Cicero, Illinois, close to Chicago. Between 1924 and 1932 Elton Mayo carried out productivity trials there that have become some of the most well-known in social science, as the study is often held up as a demonstration that people respond to change when they know they are being observed or studied. So, who knows, perhaps the pupils, parents and teachers did indeed change their behaviour while the study was taking place... Oh well...

Read me...

Electromagnetism redefined?

I have finally had some time to catch up with the brand new Observer Tech Monthly magazine, a very welcome addition to the fine Guardian and Observer newspapers. So, there I was, reading about Paul Mason and his tech, and how the body clock works. Then, after a turn of the page, I found an article by Alok Jha explaining Maxwell's equations and how they electrified the world. All great, except... except... well... except the equations they framed (written, as expected, with chalk on a blackboard) are incorrect. OK, at least one of them is incorrect, but that is enough to redefine the entire electromagnetic theory.

Observer Maxwell Equations

They start by showing the equations for the case of a region with no charges ($latex \rho = 0$) and no currents ($latex J = 0$), such as in a vacuum. The correct set of Maxwell's equations reduces in that case to:

  • $latex \nabla \cdot {\bf E}=0$
  • $latex \nabla\cdot {\bf B}=0$
  • $latex \nabla\times {\bf E}=-\frac{\partial {\bf B}}{\partial t}$
  • $latex \nabla\times {\bf B}=\frac{1}{c^2}\frac{\partial {\bf E}}{\partial t}$
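One quick way to see these vacuum equations at work: a plane wave $latex {\bf E} = E_0\cos(kz-\omega t)\,\hat{x}$, $latex {\bf B} = (E_0/c)\cos(kz-\omega t)\,\hat{y}$ satisfies the two curl equations provided $latex \omega = ck$. A numerical sketch checking the Faraday equation (the amplitude, wavenumber and sample point are arbitrary illustrative choices):

```python
import math

c = 299_792_458.0            # speed of light in vacuum (m/s)
E0, k = 1.0, 1.0             # arbitrary amplitude and wavenumber
w = c * k                    # dispersion relation omega = c k

def Ex(z, t):                # x-component of E for the plane wave
    return E0 * math.cos(k * z - w * t)

def By(z, t):                # y-component of B for the plane wave
    return (E0 / c) * math.cos(k * z - w * t)

def diff(f, x, h):           # central finite difference
    return (f(x + h) - f(x - h)) / (2 * h)

z0, t0 = 0.3, 1e-7           # an arbitrary point in space and time
curl_E_y = diff(lambda z: Ex(z, t0), z0, 1e-6)     # (curl E)_y = dEx/dz
minus_dB_dt = -diff(lambda t: By(z0, t), t0, 1e-12)
print(abs(curl_E_y - minus_dB_dt) < 1e-4)          # Faraday's law holds: True
```

Run the same check against the version printed in the Observer and the two sides no longer agree, which is the whole point.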

I have used the notation $latex {\bf B}$ for the magnetic field... In any case, note the last two equations I wrote above. Can you see the difference between them and the ones depicted in the newspaper article? I wonder what sort of electromagnetic phenomena would be predicted by the redefined equations in the Observer... who knows, perhaps that is the way electromagnetic fields behave in another Universe, but not in this one.


Read me...

Léon Foucault celebrated in a Google Doodle

If you encountered a pendulum going round on the Google page this morning, it is because the Google Doodle is celebrating the birthday of Jean Bernard Léon Foucault, a French physicist and inventor of a pendulum that demonstrated the rotation of the earth.

Among other things he is credited with making an early measurement of the speed of light. Foucault was born in Paris in 1819, where he initially studied medicine but soon switched to physics (hurray!). He demonstrated his pendulum, a 28 kg bob on a 67-metre wire, at the Panthéon in Paris in 1851. The plane of motion of the pendulum rotated slowly clockwise with respect to the earth.
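How slowly? The swing plane turns through $latex 360\sin(\mathrm{latitude})$ degrees per sidereal day, clockwise in the northern hemisphere. A quick sketch for Paris (the latitude figure is my own approximation, not something from the Doodle):

```python
import math

# Foucault precession: the swing plane rotates 360*sin(latitude) degrees
# per sidereal day (clockwise in the northern hemisphere).
latitude_deg = 48.85                    # approximate latitude of Paris
sidereal_day_h = 23.934                 # length of a sidereal day in hours
rate = 360.0 * math.sin(math.radians(latitude_deg)) / sidereal_day_h
print(f"~{rate:.1f} degrees per hour")  # about 11.3 deg/h at the Pantheon
```

At the poles the plane would turn a full 360° per day, while at the Equator it would not turn at all.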


Foucault pendulum

Read me...

Eigenvectors and Eigenvalues

I was talking to some students the other day (actually... a couple of months ago... ahem...); they had some questions about linear algebra problems, and after a short while it became clear that they had mastered some of the techniques for dealing with matrices and transformations, but sadly had no idea about some of the important concepts. The discussion moved on to the importance of Eigenvectors, and thus Eigenvalues. They could not answer, other than... "the way to calculate the Eigenvalue is...". So I decided to write an entry here about why we are interested in these things (other than to pass the exam...).

Let me start with the origin and meaning of the word Eigen: it comes from German and is a prefix that can be translated as "proper", "own", "particular". That perhaps hints at the mathematical meaning, which could even be translated as "characteristic", a usage first introduced (I believe...) by David Hilbert. Sometimes Eigenvectors are thus called "Proper Vectors", although that is not my personal preference.

English: Linear transformation by a given matrix
English: Linear transformation by a given matrix (Photo credit: Wikipedia)

Consider a collection of numbers arranged in $latex n$ rows and $latex n$ columns, i.e. a square matrix that we will call $latex \bf{A}$. Let us also consider a column vector $latex \bf x$ with $latex n$ elements, not all zero. We can therefore carry out the matrix multiplication $latex \bf{Ax}$. Now we raise the following question: is there a number $latex \lambda$ such that the multiplication $latex \lambda \bf x$ gives us the same result as $latex \bf Ax$? In other words: $latex \bf{Ax} = \lambda \bf x$. If so, then we say that $latex \lambda$ is an Eigenvalue of $latex \bf A$ and $latex \bf x$ is a corresponding Eigenvector. Great! That part is fine and we can compute these quantities, but why are we interested in this? Well, it turns out that many applications in science and engineering rely on linear transformations, which in turn use Eigenvectors and Eigenvalues. A linear transformation is a function between two vector spaces that preserves the operations of addition and scalar multiplication. In simpler terms, a linear transformation takes, for example, straight lines into straight lines or to a single point, and it can be used to describe how an object is stretched or rotated, and that is indeed useful.
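A minimal numerical sketch of the definition, using a hypothetical $latex 2\times 2$ symmetric matrix of my own choosing (not tied to any particular application):

```python
import math

# Hypothetical example matrix: find lambda and x with A x = lambda x.
A = [[2.0, 1.0],
     [1.0, 2.0]]

# Characteristic polynomial of a 2x2 matrix: lambda^2 - trace*lambda + det = 0
trace = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = math.sqrt(trace ** 2 - 4 * det)
eigenvalues = [(trace + disc) / 2, (trace - disc) / 2]   # here: 3.0 and 1.0

def eigenvector(lam):
    # (A - lam I) x = 0: the rows are proportional, so solve from the first row
    a, b = A[0][0] - lam, A[0][1]
    return [1.0, -a / b] if b else [1.0, 0.0]

for lam in eigenvalues:
    x = eigenvector(lam)
    Ax = [A[0][0] * x[0] + A[0][1] * x[1],
          A[1][0] * x[0] + A[1][1] * x[1]]
    # A x equals lambda x: the vector is simply stretched by the factor lambda
    print(lam, all(abs(Ax[i] - lam * x[i]) < 1e-12 for i in range(2)))
```

The Eigenvectors here, [1, 1] and [1, -1], are the two directions this matrix merely stretches, by factors 3 and 1 respectively.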

So, where do Eigenvectors and Eigenvalues come into place? Well, they make linear transformations easier to understand. Eigenvectors can be seen as the "directions" along which a linear transformation stretches (or compresses), or flips an object, whereas Eigenvalues are effectively the factors by which such changes occur. In that way, Eigenvalues characterise important properties of linear transformations, for example whether a system of linear equations has a unique solution, and as described above, it can also describe the physical properties of a mathematical model.

Do you want a concrete example in which this is used on daily life? Well, have a look at PageRank used by Google...

Read me...

Backwards and Forwards in Time

Time flies, time is money, time is a wise counsellor, time is relative, time is... very hard to define. Paraphrasing St Augustine, I can say that I know what time is if no one asks me, but if I try to explain it I simply do not know. It seems very natural to acknowledge the passing of time; however, when we take a moment to think about its meaning, we quickly find ourselves with a few problems.
We start by arguing that time can be defined by the interval between two successive events and thus we need a ruler to measure that interval. This is indeed a quest that us humans have pursued since the dawn of civilisation; it is very easy to see how the definition of day comes about: it is the interval between two successive sunrises. Once we have this in place a lot follows effortlessly: on the one hand we can start taking smaller intervals and define hours, minutes, seconds, and on the other, it is now possible to refer to events taking place in the past, the present and even the future. The ordering of these three concepts is intuitive as time flows from the past to the future, and we even see it manifested in the objects around us. We can imagine that we go to a museum where a film installation is being shown. The film starts with a large red stain in an otherwise immaculately white carpet. The camera spans and we see some pieces of glass strangely being attracted to each other while the red stain starts to shrink. The next thing we see is a wine glass appear before our eyes and wine droplets jump into it as if by magic. It is immediately obvious that the film was played in reverse as there seems to be a natural “forward direction”. This directionality is often referred to as the arrow of time and whenever it is discussed the subject of causality arises, and even time travel.

When I mention causality I am referring to the relationship between causes and effects; in the case of the film I used as an example, the cause of the spill is shown to us as the artist hits the wine glass. When the film is shown in reverse, we tangibly notice that there is something missing: the glass cannot "unbreak" of its own accord. What does physics have to say about this? If we were to analyse the film using the laws of motion described by Newton, we would find that there is no difference between the forward and backward directions. In other words, time reversal is not prohibited anywhere in Newtonian mechanics. This means that, given a present state under specific conditions, we are able to predict the future, but also retrodict the past, as there is no distinction between the two. This sounds surprising, as this sort of thing does not happen in our daily lives.

Scientists have come up with their own versions of the wine glass film described above. In one case, they took two particles of light, known as photons, with certain energies and smashed them together; the collision produced a pion and a deuteron. Do not be too concerned about what these two new particles are; this will not affect the discussion. When the film is reversed, it shows a pion and a deuteron colliding and producing two photons as a result. This reversed experiment has also been realised, and lo and behold, the physicists observed the generation of the two photons as predicted, confirming that the laws that govern these phenomena do not change when time is reversed. As you may have noticed, we have blatantly ignored the present, and this is because we think of it as a transitory state between the past and the future. In other words, the past is gone while the future has not arrived, and the ephemeral present expires as soon as we try thinking of it.

From this point of view, the result of these experiments seems to indicate that the arrow of time is embedded in our perception. It has been argued that the arrow of time is a psychological effect, and that this feeling that time flows mercilessly from the past to the future is all subjective. Let us take these arguments a step further: if indeed there is no difference between past and future, then there is nothing stopping us from travelling to the future (as we inevitably do) or to the past (as we clearly are not). Believe it or not, physics has something to tell us about this. I mentioned above that time reversal is allowed by Newtonian mechanics, so why can we not put the wine glass together again by time-reversing the process, rather than supergluing the broken pieces? The answer is not in the realm of mechanics, but in that of thermodynamics, in other words the study of how energy converts between heat and other forms of energy. In that manner, physicists also talk about a thermodynamic arrow of time, in the sense that a given physical system invariably becomes ever more disordered, and since disorder is so important, we quantify it with a quantity called entropy. This rule that tells us that entropy increases with time is known as the second law of thermodynamics. Following this line of thought, we are not allowed to fix our broken wine glass by running time backwards because it would imply going from a more disordered state to a more ordered one without using any extra energy, and so travelling to the past is not an easy task to achieve.
What about travelling to the future, or in the direction pointed by entropy? Well, in that case there is certainly nothing that stops us in our tracks. In fact, as I pointed out earlier on, we are already travelling to the future, and we do that at a pace of sixty minutes an hour. However, if we wanted to travel to the future at a different rate, Einstein's theory of relativity gives us a recipe to achieve this. In the so-called special theory of relativity the world has four dimensions: the usual three space dimensions that we know and love, i.e. length, width and height; and one dimension that is related to time. In other words, when you walk from one place to another in the gallery where the wine glass video is being shown, you automatically change your position on the time coordinate, even if you don't notice. Einstein tells us that if we were to travel at speeds close to the speed of light, time expands from the perspective of a stationary observer, whereas space contracts from the point of view of the moving person. This brings into question the notion of simultaneity, as two events that seem to happen at the same time for the stationary person could in principle happen at different times from the point of view of the moving person. It is fascinating to compare Einstein's efforts to unravel the secrets of simultaneity in time with Picasso's cubism to depict simultaneity in space. The effect of time dilation has been experimentally confirmed with very precise caesium clocks. Unfortunately, it is completely outside of human experience, because we have not yet devised a way of travelling at speeds where relativistic effects become noticeable. Even if we were to spend our entire lives in a plane that moves at supersonic speed, we would barely gain a second over our contemporaries on the ground.
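To put a rough number on the supersonic-plane claim, here is a small sketch of the time dilation formula from special relativity; the flight speed and trip length are illustrative assumptions, not figures from the text.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def lorentz_factor(speed):
    """gamma = 1 / sqrt(1 - v^2 / c^2): how much time dilates."""
    return 1.0 / math.sqrt(1.0 - (speed / C) ** 2)

# A supersonic plane at roughly 600 m/s, flown continuously for 70 years:
gamma = lorentz_factor(600.0)
seconds_in_70_years = 70 * 365.25 * 24 * 3600
gained = (gamma - 1.0) * seconds_in_70_years  # well under one second
```

Even a lifetime at supersonic speed yields a time difference of only a few milliseconds, which is why relativistic effects stay outside everyday experience.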

So, time travel as presented to us in sci-fi films is not yet possible, but that has not stopped us from imagining its consequences. As for the definition of time, I am sure that there are many other things that can be said on the subject. Unfortunately, time is a merciless master, and that is all the time and space I have for now.

Dr Jesús Rogel-Salazar
(originally appeared in Artesian : Issue Three : Time : 2011)

Read me...

Yuletide Disquisitions?? Come on!

I tweeted earlier this week an Open Letter to the Royal Institution by Ian Gent in response to the bizarre and quite frankly ludicrous decision by the RI to trademark the term "Christmas Lectures". I agree with the points made by Ian as well as others (see here and here). I find it quite offensive to the scientific and science communication communities to make it illegal to use the term Christmas Lecture if you happen to organise an event where a lecture will be given during the Christmas period... I suppose people will have to start organising Yuletide Disquisitions...

Detail of a lithograph of Michael Faraday delivering a Christmas lecture at the Royal Institution (Photo credit: Wikipedia)
Read me...

Listen as Albert Einstein Reads ‘The Common Language of Science’ 1941 | Open Culture

Listen as Albert Einstein Reads ‘The Common Language of Science’ 1941 | Open Culture.

Have you ever wondered how Albert Einstein sounded? Well, here you have an opportunity to find out. In the link above there is a recording of Einstein reading an essay (in English) called "The Common Language of Science".


Read me...

Copernicus celebrated - 540th Birthday Anniversary

I was planning to post this yesterday, but for one thing or another I forgot… Anyway, yesterday Nicolaus Copernicus would have been celebrating his 540th birthday. Copernicus is well known for heliocentrism, i.e. the idea that the Earth orbits the Sun. At the time, he proposed his idea without the aid of any equipment, and he was (of course) branded a heretic along the way. It was not until Galileo used his new telescope that the idea was shown to be right… Its acceptance would take longer still, and in the meantime Galileo would be branded a heretic too…

I was therefore quite pleased to see the doodle that Google used yesterday to commemorate Copernicus. The doodle shows the planets of the Solar system orbiting their parent star. Happy birthday Copernicus.

Copernicus doodle



Read me...

Elliptical Answers to Why Winter Mornings Are So Long - Reblogged from NYTimes.com

Reblogged from Elliptical Answers to Why Winter Mornings Are So Long - NYTimes.com by John O'Neil.

As the parent of teenage boys who have to be dragged out of bed on school days, I had been looking forward to earlier sunrises once the winter solstice was past. But early January mornings seemed darker than ever while at the same time, the sky was clearly lighter around 5 p.m.

Tony Cenicola/The New York Times

FIGURE 8 An analemma shows the Sun’s varying positions over a year.

It turned out that what I suspected was actually true — by Jan. 2, there were 12 more minutes of sunlight in the afternoons, but 3 fewer minutes in the morning. It also turned out that the reasons for it were complicated, as I discovered in a series of phone and e-mail conversations with Jay M. Pasachoff, a professor of astronomy at Williams College, and a former student of his, Joseph Gangestad, who received his Ph.D. in orbital mechanics from Purdue.

They pointed me to the Equation of Time, a grandly named formula relating to the fact that not all days are 24 hours, if you track noon by the position of the Sun instead of on a clock.

“We’ve all seen a readout of the Equation of Time,” Dr. Pasachoff said. “It’s that uneven figure 8 that can be found on globes in a deserted part of the Pacific, a shape known as an analemma.”

If Earth’s axis were perpendicular to its orbit instead of tilted, and if its orbit were a circle instead of an ellipse, the Sun would appear in the same spot in the sky each day and clocks and sundials would always match. Instead, they can be as much as 16 minutes apart, and that’s where things get complicated.

As Earth moves toward winter solstice, you have “different things going on at the same time,” Dr. Pasachoff said.

Earth’s tilt means that every day during the fall, the angle at which we view the Sun changes. It appears farther south and travels a shorter arc across the sky, affecting sunrise and sunset equally, and making the day shorter.

The changes in the solar time follow a different cycle. In the early 1600s, Kepler discovered that planets move faster at the part of their orbit that is closest to the sun, the perihelion. For Earth, perihelion comes a little after the winter solstice, so from November on, Earth is accelerating.

That increased speed means we reach the Sun’s maximum a little earlier each day, which pushes solar noon backward against clock time. That shift is amplified because the Sun is traveling a little south each day, while clocks only count its east to west traverse.

Add it all together and you get sunrise and sunset times that are not symmetrical. In the weeks before the winter solstice, sunrise is being pushed later by both the changing angle of the Sun and the slowing of solar time. But sunset is being pushed in both directions — earlier by the Sun’s angle and later by the change in solar time.

The result is more darkness in the morning and less in the afternoon. That’s why the earliest sunset of 2012, at 4:29 p.m., in New York fell as early as Nov. 30, according to the National Oceanic and Atmospheric Administration’s solar calculator, while mornings continued to stay dark later. After the solstice, Earth continued its acceleration until reaching perihelion on Jan. 2. So the sunrise continued to slide, reaching its latest point, 7:20 a.m., on Dec. 28. There it stood until Jan. 11, when we finally got another minute of morning light. By Feb. 7, sunrise will be all the way back to 7 a.m.

“It’s hard to wrap the mind around this problem, which is really a figment of our timekeeping system,” Dr. Gangestad said. That is, we would never notice it if we all just used sundials.
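For the curious, the sundial-versus-clock offset discussed above can be computed with a standard textbook approximation for the Equation of Time; this formula is not from the article itself, just a common engineering sketch of the effect.

```python
import math

def equation_of_time(day_of_year):
    """Approximate sundial-minus-clock offset in minutes for a given day.

    A common textbook approximation; positive values mean the sundial
    runs ahead of the clock.
    """
    b = 2.0 * math.pi * (day_of_year - 81) / 365.0
    return 9.87 * math.sin(2.0 * b) - 7.53 * math.cos(b) - 1.5 * math.sin(b)

# Around early November (day 307) the offset peaks near +16 minutes,
# matching the "as much as 16 minutes apart" quoted above.
november_peak = equation_of_time(307)
# In mid-April (day 105) clock and sundial nearly agree.
april = equation_of_time(105)
```

Plotting this function over a whole year traces out exactly the lopsided figure 8 of the analemma described by Dr. Pasachoff.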

Read me...

The Swanson Effect

Solar energy currently provides only a quarter of a percent of the planet’s electricity supply, but the industry is growing at staggering speed. Underlying this growth is a phenomenon that solar’s supporters call Swanson’s law, in imitation of Moore’s law of transistor cost. Moore’s law suggests that the size of transistors (and also their cost) halves every 18 months or so. Swanson’s law, named after Richard Swanson, the founder of SunPower, a big American solar-cell manufacturer, suggests that the cost of the photovoltaic cells needed to generate solar power falls by 20% with each doubling of global manufacturing capacity. The upshot is that the modules used to make solar-power plants now cost less than a dollar per watt of capacity. This means that in sunny regions such as California, photovoltaic power could already compete without subsidy with the more expensive parts of the traditional power market. Moreover, technological developments that have been proved in the laboratory but have not yet moved into the factory mean Swanson’s law still has many years to run.
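Swanson's law as stated above is easy to put into code; a minimal sketch, with a hypothetical starting price of $4 per watt chosen purely for illustration:

```python
def module_cost(initial_cost, doublings, learning_rate=0.20):
    """Swanson's law: module cost falls by `learning_rate` (20%)
    with each doubling of cumulative manufacturing capacity."""
    return initial_cost * (1.0 - learning_rate) ** doublings

# Hypothetical figures: a $4/watt module after five capacity doublings.
cost = module_cost(4.0, 5)  # 4 * 0.8**5 = about $1.31 per watt
```

The compounding is the point: because the rule is multiplicative in doublings of capacity rather than in calendar time, rapid industry growth is what drives the price down.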

See full article in the Economist.


Swanson Effect


Read me...

Mirror mirror, right and left, up and down

You are getting ready for the New Year's party and cannot help but use a mirror to check that all is spot on. The tie is straight, the hair is tamed, the shoes are polished but wait... right is wrong, or rather right is left and left is right... but up is still up and down is down. Why, you ask, do mirrors reverse right and left but not up and down? Well, the answer is that they do neither. They reverse front to back...

Madame Jeantaud in the Mirror

The image that you see in front of you has not been swapped, but inverted along the axis of the mirror. So the answer to this question can be understood by looking at how light gets reflected. If we consider a light source, its rays will bounce off various parts of your body, reflect off the mirror and be caught by your eyes; we also consider that, for all intents and purposes, light travels in a straight line. And so a mirror (not a fun-fair mirror, by the way) will simply reflect what is in front of it: the light bouncing off your right hand will hit the mirror straight on and then bounce into your eye. What you will perceive is that you see your right hand in the place of the left one. And notice that it is a matter of perception... Now, try the following: position yourself looking North and place the mirror in front of you. Now point at something East with your right hand; you will see that the hand in the mirror also points East. The same happens if you point West with your left hand. So the directions are fine: East is East and West is West. But look at your nose: it points North, right? What about the nose in the image? Well, it points South! The image is reversed front to back.

Richard Feynman gives this explanation in the BBC TV series "Fun to Imagine" from 1983.

Now, you have something to think about next time you are getting ready in front of the mirror.


Read me...

Parthenogenesis - Sci-advent...



I know that strictly speaking there should not be an entry for December 25th in the Sci-advent, but to tell you the truth I could not help myself and decided to do one more. This time it is about parthenogenesis: Parthenos (παρθένος), meaning virgin in Greek, and Genesis (γένεσις), meaning birth. The name Parthenos appears, for instance, in Greek mythology in the story of the daughter of Apollo and Chrysothemis, who died a maiden and was placed among the stars as the constellation of Virgo (fittingly enough...).

Almost all animal species reproduce sexually, mixing the genes of two different individuals through meiosis and fertilisation. About 1% of animal species reproduce by parthenogenesis, while an even smaller fraction switch between sexual and asexual reproduction (known as cyclical parthenogenesis). One method of parthenogenesis involves sex cell division and recombination, while another just produces an egg with a full complement of DNA. Parthenogenesis is known to happen in some species of fish, amphibians and reptiles... but not in humans...



Read me...

Chromosomes - Sci-advent - Day 23

All known living organisms have their genetic information encoded in a molecule called deoxyribonucleic acid or DNA. Genetic information is encoded as a sequence of four nucleotides: guanine (G), adenine (A), thymine (T) and cytosine (C), recorded using the letters G, A, T and C. DNA molecules are double-stranded helices whose strands run in opposite directions to each other.

If we were to extend DNA molecules, they would be very long; instead, DNA is coiled and packaged in structures called chromosomes, which in turn are contained in the nucleus of the cell. Different species have different numbers of chromosomes (humans have 46 chromosomes, or 23 sets of chromosome pairs; peas have 14 chromosomes or 7 pairs; and tomatoes 24 chromosomes or 12 pairs). In sexual reproduction, one chromosome in each pair is contributed by each parent.
Each chromosome has a narrowing point called the centromere, which divides the chromosome into two sections, or “arms.” The short arm of the chromosome is labeled the “p arm.” The long arm of the chromosome is labeled the “q arm.” The location of the centromere on each chromosome gives the chromosome its characteristic shape, and can be used to help describe the location of specific genes.
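The antiparallel arrangement of the two strands can be illustrated with a short sketch: because G pairs with C and A pairs with T, and the strands run in opposite directions, one strand is the reverse complement of the other. The sequence below is made up for illustration.

```python
# The two strands pair G with C and A with T and run in opposite
# directions, so one strand is the reverse complement of the other.
# The sequence below is made up for illustration.

COMPLEMENT = {"G": "C", "C": "G", "A": "T", "T": "A"}

def reverse_complement(sequence):
    """Return the antiparallel partner of a DNA strand."""
    return "".join(COMPLEMENT[base] for base in reversed(sequence))

partner = reverse_complement("GATTACA")  # "TGTAATC"
```

Applying the operation twice gives the original strand back, which is exactly what the double-helix pairing implies.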


Read me...

Element 22: Titanium - Sci-advent - Day 22

Element 22 was named after the Titans - sons of the Earth - in Greek mythology. Titanium was discovered by William Gregor in 1791 in Cornwall, England, and it is the ninth most abundant element in the Earth's crust. It is found in minerals such as rutile, ilmenite and sphene. Pure titanium was first produced in 1910 by Matthew A. Hunter.

Titanium is a strong, light metal: to get an idea, it is as strong as steel, but 45% lighter. It is resistant to corrosion and does not react with the human body; it is paramagnetic and has low electrical and thermal conductivity. Due to these characteristics it is used in a number of components that are exposed to sea water. In alloys it is used in airplanes and rockets, and in implants such as artificial hips, pins and other biological implants. Titanium oxide (TiO2) is used as a pigment to create white paint and accounts for the largest use of the element. Titanium tetrachloride (TiCl4), another titanium compound, has been used to make smoke screens. Pure titanium oxide is relatively clear and is used to create titania, an artificial gemstone. Powdered titanium is used in pyrotechnics as a source of bright-burning particles.

Read me...

Photoelectric Effect - Sci-advent - Day 21

We have seen how light could be described in terms of a wave, as demonstrated by the double-slit experiment. Nonetheless, that is not the whole story. For instance, in 1888, Wilhelm Hallwachs described an experiment using a circular zinc plate mounted on an insulating stand and attached by a wire to a gold leaf electroscope, which was then charged negatively. The electroscope lost its charge very slowly. However, if the zinc plate was exposed to ultraviolet light, charge leaked away quickly. The leakage did not occur if the plate was positively charged.

By 1899, J. J. Thomson established that the ultraviolet light caused electrons to be emitted, the same particles found in cathode rays: atoms in the cathode contained electrons, which were shaken and caused to vibrate by the oscillating electric field of the incident radiation. In 1902, Philipp Lenard described how the energy of the emitted photoelectrons varied with the intensity of the light: doubling the light intensity doubled the number of electrons emitted, but did not affect the energies of the emitted electrons. The more powerful oscillating field ejected more electrons, but the maximum individual energy of the ejected electrons was the same as for the weaker field.

In 1905, Einstein proposed a way to explain these observations: he assumed that the incoming radiation should be thought of as quanta of energy hf, with h Planck's constant and f the frequency. In photoemission, one such quantum is absorbed by one electron. If the electron is some distance into the material of the cathode, some energy will be lost as it moves towards the surface. There will always be some electrostatic cost as the electron leaves the surface; this is usually called the work function, W. The most energetic electrons emitted will be those very close to the surface, and they will leave the cathode with kinetic energy hf - W. This explanation was successful and validated the interpretation of the behaviour of light as particles. In 1921, Einstein was awarded the Nobel Prize in Physics "for his services to Theoretical Physics, and especially for his discovery of the law of the photoelectric effect".
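Einstein's relation can be sketched numerically, tying back to Hallwachs' zinc plate. The work function used for zinc (about 4.3 eV) is an approximate literature value assumed for illustration.

```python
PLANCK_EV = 4.135667696e-15  # Planck constant in eV*s

def max_kinetic_energy(frequency_hz, work_function_ev):
    """Einstein's relation E_max = h*f - W, in electron-volts.
    Returns 0.0 when the photon energy is below the work function."""
    return max(PLANCK_EV * frequency_hz - work_function_ev, 0.0)

W_ZINC = 4.3  # approximate work function of zinc, in eV (assumed value)

# Ultraviolet light (1.5e15 Hz) frees electrons from zinc...
uv = max_kinetic_energy(1.5e15, W_ZINC)
# ...but visible light (5e14 Hz) does not, however intense it is.
visible = max_kinetic_energy(5e14, W_ZINC)
```

This captures Lenard's puzzle: intensity changes how many electrons are ejected, but only frequency changes each electron's maximum energy.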

One very prominent application of the photoelectric effect is solar energy produced by photovoltaic cells. These are made of semi-conducting material which produce electricity when exposed to sunlight.



Read me...

Mayan Numeral System and Calendar - Sci-advent - Day 20

Numeros Maya

The Mayas are one of the greatest human civilisations. Not only did they have excellent agriculture, pottery and hieroglyph writing, but they also had some of the most impressive architecture and symbolic art, as well as mathematics, astronomy and calendar-making. It is said that they had predicted the "end of the world", but I would like to think of it as the end and beginning of a calendar cycle. Not so different from the arbitrary December 31st in our calendars...

In order to understand the Mayan calendar cycle, we need to know a bit about their number system, which is a vigesimal system, i.e. based on the number 20. They used three basic number symbols: a shell for zero, a dot for 1 and a line for 5. Also of note is that they were one of the earliest civilisations anywhere in the world to have the concept of zero. The system is pseudo-positional; in a true positional vigesimal system, the number that appears first would denote the number of units up to 19, the next would denote the number of 20s up to 19, the next the number of 400s up to 19, etc. In the Mayan system the numbering starts in that way with the units up to 19 and the 20s up to 19, but it changes in the third place, which denotes the number of 360s up to 19. After this the system reverts to multiples of 20, so the fourth place is the number of 18 × 20², the next the number of 18 × 20³ and so on. For example, [ 8;14;3;1;12 ] represents

12 + 1 × 20 + 3 × 18 × 20 + 14 × 18 × 20² + 8 × 18 × 20³ = 1253912.

As a second example [ 9;8;9;13;0 ] represents

0 + 13 × 20 + 9 × 18 × 20 + 8 × 18 × 20² + 9 × 18 × 20³ = 1357100.

Now, to the calendar: the calendar was truly behind the number system and vice versa. They had two calendars: the Tzolkin with 260 days, with 13 months of 20 days each, and the Haab with 365 days, with 18 months of 20 days each and a shorter month of 5 days (called Wayeb). The Tzolkin was a ritual calendar, while the Haab was a civil one, and the Wayeb was considered "unlucky". With these two calendars, it is possible to see when they would return to the same cycle: the least common multiple of 260 and 365 is 18980 days, equivalent to 52 civil years or 73 ritual years. Astronomy also played an important role; for instance, Mayan astronomers calculated Venus' synodic period (after which it has returned to the same position) to be 584 days. In two 52-year cycles, Venus would have made 65 revolutions and be back in the same position.

Apart from those calendars, the Mayas had another way of measuring time using an absolute scale based on a "creation date and time", often taken to be 12 August 3113 BC (though of course that is a matter of debate). This date can be taken as the zero of the so-called "Long Count". The Long Count is a count of days represented in the modified Mayan number system described above. Let us have a look at an example: [ 9;8;9;13;0 ] is the completion date on a building in Palenque in Tabasco, Mexico. It translates to

0 + 13 × 20 + 9 × 18 × 20 + 8 × 18 × 20² + 9 × 18 × 20³

which is 1357100 days from the creation date of 12 August 3113 BC so the building was completed in 603 AD.
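The conversion above can be sketched in a few lines, using the modified place values 1, 20, 360, 7200 and 144000 days:

```python
# Place values in the Long Count, most significant digit first:
# 144000 (baktun), 7200 (k'atun), 360 (tun), 20 (winal), 1 (k'in) days.
PLACE_VALUES = [144_000, 7_200, 360, 20, 1]

def long_count_to_days(digits):
    """Convert a five-digit Long Count such as [9, 8, 9, 13, 0]
    into a number of days since the creation date."""
    return sum(d * v for d, v in zip(digits, PLACE_VALUES))

palenque = long_count_to_days([9, 8, 9, 13, 0])   # 1357100 days
example = long_count_to_days([8, 14, 3, 1, 12])   # 1253912 days
```

Note how the third place is worth 360 rather than 400, which is exactly the "pseudo-positional" twist described above.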

The Long Count was divided as follows:

  • 1K'in = 1 Day
  • 1 Winal = 20 K'in
  • 1 Tun = 18 Winal = 360 K'in
  • 1 K'atun = 20 Tun = 7200 K'in
  • 1 Baktun = 20 K'atun = 144,000 K'in

On December 21, 2012, the 14th Baktun starts with the representation [ 13;0;0;0;0 ] and of course the 13th Baktun finishes... but certainly not the world!



Read me...

Double Slit Experiment - Sci-advent - Day 19

Double Slit Experiment


The double-slit experiment is one of the most famous experiments in physics and one with great implications in our understanding of Nature. Although the experiment was realised originally with light, it can be done with any other type of wave.

Thomas Young conducted the experiment in the early 1800s. The aim was to allow light to pass through a pair of slits in an opaque screen. Each slit diffracts the light and thus acts as an individual light source. When a single slit was open, the light hit the screen with maximum intensity in the centre, fading away from it. But when there are two slits, the light produces an interference pattern on the screen - a result that would not be expected if light consisted strictly of particles. Although the experiment favours the wave-like description of light, that is not the whole story. This interpretation is at odds with phenomena where light can behave as if it were composed of discrete particles, such as the photoelectric effect. Light exhibits properties of both waves and particles, giving rise to the concept of wave-particle duality used in quantum mechanics.
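The fringes have a simple idealised description: a sketch of the textbook two-slit intensity formula, ignoring the single-slit diffraction envelope, with made-up slit separation and wavelength values.

```python
import math

def two_slit_intensity(angle_rad, separation, wavelength, i0=1.0):
    """Idealised two-slit pattern, I = 4*I0*cos^2(pi*d*sin(theta)/lambda),
    ignoring the single-slit diffraction envelope."""
    phase = math.pi * separation * math.sin(angle_rad) / wavelength
    return 4.0 * i0 * math.cos(phase) ** 2

d, lam = 10e-6, 530e-9  # slit separation and wavelength (illustrative)

central = two_slit_intensity(0.0, d, lam)  # bright: 4 times one slit's I0
# First dark fringe, where the path difference is half a wavelength:
dark = two_slit_intensity(math.asin(lam / (2.0 * d)), d, lam)
```

The central maximum is four times the intensity of a single slit, not two: the wave amplitudes add before being squared, which is precisely what a strictly particle picture cannot reproduce.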

Read me...

Total Internal Reflection - Sci-advent - Day 18

Total Internal Reflection

We are well acquainted with some optical phenomena such as reflection and refraction; simply take a look at an object half-submerged in a glass of water. But light has other (many other) tricks up its sleeve. One very useful trick is total internal reflection. As the name suggests, this phenomenon happens when a ray of light strikes a medium boundary at an angle greater than a particular angle (known as the critical angle) with respect to the normal to the surface. If the refractive index is lower on the other side of the boundary, the light cannot pass through and is instead entirely reflected, as if it had hit a perfect mirror.
Total internal reflection is widely used in the operation of optical fibres and devices such as endoscopes, and in telecommunications, rain sensors in cars and some multi-touch displays.
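The critical angle follows from Snell's law as arcsin(n2/n1); a minimal sketch, with typical glass-to-air refractive indices assumed for illustration:

```python
import math

def critical_angle_deg(n_inside, n_outside):
    """Critical angle (degrees) from Snell's law: sin(theta_c) = n2/n1.
    Valid only when light travels from a denser to a rarer medium."""
    if n_inside <= n_outside:
        raise ValueError("no total internal reflection in this direction")
    return math.degrees(math.asin(n_outside / n_inside))

# Typical glass (n ~ 1.5) to air (n ~ 1.0): about 41.8 degrees.
glass_to_air = critical_angle_deg(1.5, 1.0)
```

In an optical fibre the light meets the core-cladding boundary at a far shallower grazing angle than this, so it is reflected again and again all the way down the fibre.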

Read me...

Saturn's Hexagonal Storm - Sci-advent - Day 17

Saturn is well known for its rings and it cannot be denied that they are a feature that makes this planet an intriguing world. However, in the 1980s NASA's Voyager 1 and 2 observed a bizarre, but symmetrically interesting feature at the north pole of Saturn: a hexagonal-shaped storm. More recently, NASA's Cassini has been able to image Saturn's hexagon in greater detail. The hexagon is 25,000 km (15,000 miles) across. In fact, you could nearly fit four Earth-sized planets there.

The hexagon appears to have remained fixed with Saturn's rotation rate and axis since first glimpsed by Voyager. The actual reason for the pattern in the storm is still a matter of speculation. Kevin Baines, atmospheric expert and member of Cassini's visual and infrared mapping spectrometer team at NASA's Jet Propulsion Laboratory, is quoted as saying: "Once we understand its dynamical nature, this long-lived, deep-seated polar hexagon may give us a clue to the true rotation rate of the deep atmosphere and perhaps the interior."

Read me...

The most secret of messages... cracked!!

A model of the GCHQ headquarters in Cheltenham
A model of the GCHQ headquarters in Cheltenham (Photo credit: Wikipedia)


In a past post I mentioned the serendipitous discovery of an encrypted message attached to the leg of a pigeon. The message, from WWII, had eluded the experts at GCHQ and the contents of the message were therefore not known. Well, it seems that a Canadian citizen has managed to do the impossible and crack the code. His name is Gord Young, and he has been quoted as saying that it took him 17 minutes to decipher the code. How did he do it? Well, it seems that he was able to do it with the help of an inherited code book.


So what is the content of the most secret of messages? Mr Young says the note uses a simple World War I code to detail German troop positions in Normandy. Here are the alleged contents of the message:


  •  AOAKN - Artillery Observer At "K" Sector, Normandy
  •  HVPKD - Have Panzers Know Directions
  • FNFJW - Final Note [confirming] Found Jerry's Whereabouts
  • DJHFP - Determined Jerry's Headquarters Front Posts
  • CMPNW - Counter Measures [against] Panzers Not Working
  • PABLIZ - Panzer Attack - Blitz
  • KLDTS - Know [where] Local Dispatch Station
  • 27 / 1526 / 6 - June 27th, 1526 hours


Is this what the message says? Well, GCHQ is surely interested in talking to Mr Young about his work... What do you think?


Read me...

Transistor - Sci-advent - Day 16

It is said that if a cell is the building block of life, then a transistor is the building block of the digital era. Without them, a lot of the gadgets, gizmos and technology we use today would simply not exist.

Transistors amplify current; for example, they can be used to amplify the small output current from a logic integrated circuit to operate a high-current device. A transistor can also be thought of as a kind of switch used in a variety of circuits, a function that is very important in computers, for instance. The fact that the switch can change between on and off makes it possible to implement binary calculations. In today's complex computers there are many millions, even billions, of transistors.
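The switch picture can be sketched in code: a toy model (not a circuit-level simulation) in which a transistor passes a signal only while its gate is on, wired into a NAND gate, from which every other logic gate can be built.

```python
# Toy model only: a transistor as a switch that passes its input
# signal while the gate is on. Not a circuit-level simulation.

def transistor(gate_on, signal):
    """Conduct the signal only while the gate input is on."""
    return signal if gate_on else False

def nand(a, b):
    # Two switches in series pull the output low only when both
    # inputs are on (a simplified pull-down chain).
    pulled_low = transistor(a, transistor(b, True))
    return not pulled_low

# NAND is universal: any other gate can be built out of it.
def not_gate(a):
    return nand(a, a)

def and_gate(a, b):
    return not_gate(nand(a, b))
```

This universality is why on/off switching is all a computer fundamentally needs: stack enough of these switch pairs together and any binary calculation follows.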


Read me...

Laser - Sci-advent - Day 15

Laser Experiment Blue

Lasers have become so common that the number of applications they have does not surprise us. Nonetheless, their characteristics still captivate all of us. Laser is an acronym for Light Amplification by the Stimulated Emission of Radiation, and it is indeed a very descriptive name.

A laser consists of three main elements: a gain medium, an energy source and a device to provide feedback to the system. The amplification of the electromagnetic radiation is done by the gain medium. This is made possible by pumping energy into the system and thus generating stimulated emission. It is very common for typical lasers to use feedback from an optical cavity, such as a pair of mirrors at each end of the gain medium.

Laser light is characterised by properties such as monochromaticity, coherence and power.


Read me...

Positron Emission Tomography - Sci-advent - Day 14


One may think that anti-matter features only in theoretical physics textbooks or in sci-fi devices; nonetheless, it is very much in current use. Positrons are the anti-particles of electrons: their existence was proposed theoretically by Paul Dirac in 1928 and they were observed experimentally by Carl Anderson in 1932. Nowadays positrons have a number of applications, including medical imaging.

Positron Emission Tomography (PET) is a three-dimensional imaging technique that works by detecting pairs of gamma rays emitted indirectly by a positron-emitting radionuclide introduced into the body. The radioactive tracer is usually injected into the subject; once inside the body it undergoes positron emission decay and emits a positron. The positron travels in tissue for a short distance, losing kinetic energy until it is able to interact with an electron. The positron-electron interaction annihilates the pair, generating a pair of gamma rays travelling in nearly opposite directions, which are detected by the scanner. Finally, the images are built with the aid of computers.


Read me...

Peltier Effect - Sci-advent - Day 13


The Peltier effect is named after Jean Charles Athanase Peltier, who discovered it by accident while investigating electricity. In the eventful experiment, Peltier joined a copper wire and a bismuth wire together and connected them to a battery. When he switched the battery on, one of the junctions of the two wires got hot, while the other junction got cold.

The Peltier effect is the heat exchange that results when electricity is passed across a junction of two conductors. It is a close relative of the Seebeck effect (effectively the same phenomenon in reverse, used in thermocouples to measure temperature) and the Thomson effect (generation of electricity along a conductor with a temperature gradient). Sparing ourselves the maths, conduction electrons have different energies in different materials, and so when they are forced to move from one conductor to another, they either gain or lose energy. This difference is either released as heat or absorbed from the surroundings.

When two conductors are arranged in a circuit, they form a heat pump, able to move heat from one junction to the other. Unfortunately, though, it's not always this simple, as the Peltier effect is always up against the Joule effect, the 'frictional' heating that results from electrons bouncing off the atoms. In most systems this swamps the Peltier effect, and all you get is a bit more heating at one junction and a bit less heating at the other. Nonetheless, the Peltier effect has a lot of technological potential. It is very reliable, and since it has no moving parts, it rarely needs maintenance and is easily made portable.
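The competition between the two effects can be sketched numerically. In this toy model (illustrative numbers, not data for any real device) the heat pumped away from the cold junction grows linearly with current, while roughly half the Joule heat flows back into it:

```python
# Net cooling at the cold junction of a Peltier element (a sketch with
# assumed, illustrative parameters):
#   Q_peltier = Pi * I          heat pumped away, proportional to current
#   Q_joule   = 0.5 * I**2 * R  roughly half the Joule heat returns
#                               to the cold junction
Pi = 0.05   # Peltier coefficient, volts (assumed)
R = 2.0     # electrical resistance, ohms (assumed)

def net_cooling(I):
    return Pi * I - 0.5 * I**2 * R

for I in [0.01, 0.025, 0.05, 0.1]:
    print(f"I = {I:5.3f} A  ->  net cooling {net_cooling(I)*1e3:6.2f} mW")
```

Past the optimum current (here Pi / R = 0.025 A) the quadratic Joule term takes over and the "cooler" starts heating its own cold side, which is exactly the swamping described above.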

Read me...

Magnetism - Sci-advent - Day 12

Materials that respond to the application of a magnetic field are described as magnetic materials. Magnetism can be attractive (paramagnetism) or repulsive (diamagnetism). Some materials are permanent magnets; this means that their magnetic fields are persistent, a behaviour caused by ferromagnetism.

Magnetic phenomena are closely related to electricity: a magnetic field can be created by moving electric charges. Electromagnetic radiation, such as light, is a form of energy emitted and absorbed by charged particles. It can exhibit wave-like behaviour as it propagates through space.

It is possible to map the magnetic field of an object by measuring the strength and direction of the field at various locations. By following the measured directions you can trace out the field lines. A map of this sort can be visualised, for instance, with a very simple experiment involving a bar magnet and some iron filings (see image above).
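The mapping procedure can be sketched in code. Treating the bar magnet as an ideal dipole (a standard approximation, and only a rough stand-in for a real magnet), we can sample the field's magnitude and direction at a few points, much as the filings do:

```python
import math

# Sample the field of an ideal 2D magnetic dipole at a few points, the
# way iron filings "sample" a bar magnet's field. For a dipole of moment
# m along +x at the origin (unit constants, so magnitudes are relative):
#   B = (3 (m . r_hat) r_hat - m) / r^3
def dipole_field(x, y, mx=1.0, my=0.0):
    r = math.hypot(x, y)
    rx, ry = x / r, y / r
    dot = mx * rx + my * ry
    return ((3 * dot * rx - mx) / r**3, (3 * dot * ry - my) / r**3)

for point in [(1, 0), (0, 1), (1, 1)]:
    bx, by = dipole_field(*point)
    angle = math.degrees(math.atan2(by, bx))
    print(f"at {point}: |B| = {math.hypot(bx, by):.3f}, direction {angle:.0f} deg")
```

On the magnet's axis the field points along the moment and is twice as strong as at the same distance on the equator, where it points the opposite way; joining up many such sampled arrows reproduces the familiar looping field-line pattern.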




Read me...

Periodic Table (by abundance) - Sci-advent - Day 11


A 1970 periodic table by Prof. Wm. F. Sheehan of the University of Santa Clara that claims to show the elements according to their relative abundance at the Earth's surface.

Dmitri Mendeleev published the first version of his periodic table in 1869. The table was developed to illustrate periodic trends in the properties of the then-known elements, which he arranged in order of increasing atomic weight (the modern table orders them by atomic number). This allowed Mendeleev to predict some properties of elements that were unknown at the time.

Mendeleev's periodic table has since been expanded and refined with the discovery or synthesis of further new elements.


Read me...

Zombie Spiders - Sci-advent - Day 10

A normal spider web on the left, compared to that built by a zombie spider (right).

Spiders and their webs are an excellent example of a predator and its trap, but can you enslave a spider? Well, it seems that a species of wasp has mastered the art. The unsuspecting spider is instructed by the parasite to leave its web behind and build a new one with a very different architecture, which will serve as a nest to nurse the wasp's larva. The new web has a thick cover and a lower platform from which a cocoon hangs. The cover protects the cocoon from rain, for instance. Once the wasp hatches, it has the zombie spider as its first meal...

Read me...

Ada Lovelace – Sci-advent – Day 9

Ada Lovelace

Ada Lovelace. Painting by Margaret Sarah Carpenter (1793–1872)

Ada Augusta Byron, Countess of Lovelace, was the daughter of the poet George Gordon, Lord Byron. She studied mathematics and worked closely with Charles Babbage, whose analytical engine was a precursor of the modern computer. Today, the 10th of December, would have been her 197th birthday, which is why Google created a doodle for her (see image below).

Ada Lovelace is today known as a mathematician and computer pioneer. Supplementing her translation of an Italian article on Babbage's analytical engine with an encoded algorithm, she published the first computer program, albeit for a machine that would not be built until more than 150 years later, as a historical project.

The Ada computer language was named after her.

Lovelace Doodle

Read me...

Sir Patrick Moore - Sci-advent - Day 8

British astronomer and broadcaster Sir Patrick Moore has died, aged 89.

Sir Patrick Moore was an inspiration to generations of astronomers and scientists in general. He presented the BBC programme The Sky at Night for over 50 years, making him the longest-serving host of the same television show ever. The first programme aired on April 24th, 1957. Sir Patrick's last appearance was last Monday, December 3rd, 2012.
He wrote dozens of books on astronomy and his research was used by the US and the Russians in their space programmes.


Read me...

Total Solar Eclipse - Sci-advent - Day 7


Total solar eclipse over the Marshall Islands in 2009. Picture by Vojtech Rusin.

A solar eclipse happens when, as seen from the Earth, the Moon passes in front of the Sun, blocking it either fully or partially. This can happen only at new moon, when the Sun and the Moon are in conjunction as seen from Earth.


Read me...

Mathematical Theorems - Sci-advent - Day 6


Maths Theorems Graph

Mathematical theorem network built from Walter Rudin's Principles of Mathematical Analysis.

Scientific knowledge is built by proposing hypotheses and theories, repeatedly checking them against observations of the natural world, and refining those explanations based on new ideas and observations. In the case of mathematics, that knowledge is organised in an incredibly structured manner: starting with the properties of the natural numbers, called axioms, and slowly working upwards, reaching the real numbers, calculus, and... well, beyond. To prove new theorems, mathematicians make use of old theorems, creating a network of interconnected results: a mathematical house of cards.

Andy Reagan has recently published a blog post entitled "What's the most important theorem?" in which, following Walter Rudin's Principles of Mathematical Analysis, he displays the theorems as nodes in a network.
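The idea of a theorem-dependency network can be sketched with a toy graph. The theorem names and dependencies below are hypothetical (not Rudin's actual structure); one crude "importance" measure, in the spirit of the post, is how many later results ultimately rest on each theorem:

```python
# A toy theorem-dependency network (hypothetical names and edges):
# each theorem maps to the earlier results it directly uses.
deps = {
    "completeness_axiom": [],
    "archimedean_property": ["completeness_axiom"],
    "bolzano_weierstrass": ["completeness_axiom"],
    "extreme_value_theorem": ["bolzano_weierstrass"],
    "mean_value_theorem": ["extreme_value_theorem"],
}

def reach(theorem, graph):
    """All results a theorem ultimately rests on (transitive closure)."""
    seen = set()
    stack = [theorem]
    while stack:
        for d in graph[stack.pop()]:
            if d not in seen:
                seen.add(d)
                stack.append(d)
    return seen

# How many later theorems depend, directly or not, on each result?
usage = {t: sum(t in reach(u, deps) for u in deps) for t in deps}
print(usage)  # the axiom sits under everything built on top of it
```

Pull a heavily-used node out of such a graph and everything above it loses its foundation, which is exactly the "house of cards" picture.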


Read me...

Large Hadron Collider - Sci-advent - Day 5



The Large Hadron Collider (LHC) is the world's largest and highest-energy particle accelerator. It was built by the European Organization for Nuclear Research (CERN). It has become a prominent facility due to the work being carried out there to prove or disprove the existence of the Higgs boson and of the large family of new particles predicted by supersymmetric theories.

The LHC was built in collaboration with over 10,000 scientists and engineers from over 100 countries, as well as hundreds of universities and laboratories. It lies in a tunnel 27 kilometres in circumference, as deep as 175 metres (574 ft) beneath the Franco-Swiss border near Geneva, Switzerland.


Read me...

The Cave of Crystals – Sci-Advent – Day 4

Cave of Crystals Mexico

A thousand feet (304 meters) underground, the Cave of Crystals is just one of a series of glittering caverns beneath the Chihuahuan Desert in the Mexican state of Chihuahua, near Naica. Much of the complex would naturally be filled with scorching water, were it not for industrial pumps that facilitate the mining of silver, zinc, lead, and other minerals in the caves.

The temperature in the cave is about 50C, and the humidity is virtually 100%. The cave was discovered by accident: miners working in the Naica silver mine broke through the walls of the cavern and were astounded to discover these enormous crystals, the biggest anywhere on Earth.


Read me...

The Babbage Difference Engine - Sci-Advent - Day 3



In 1849, British inventor Charles Babbage completed designs for a difference engine, a very early mechanical computer. Due to cost and complexity, the machine was never built in his lifetime, and for 150 years nobody knew whether it would have worked. In 2002, a Babbage Difference Engine based on the original plans was completed, and it actually works. The hand-cranked device has 8,000 parts, weighs 5 tons, and is 11 feet long. Two such machines now exist, one at the Science Museum in London and another at the Computer History Museum in Mountain View, California. To get a sense of the incredible intricacy of the Babbage Difference Engine, take a look at these interactive high resolution images of the Computer History Museum machine. The images, created by xRez Studio, are each composites of up to 1,350 individual photos. The studio also shot this short video of the machine in operation.
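What made the machine feasible with gears and cranks is the method of finite differences: for a degree-n polynomial the n-th difference is constant, so every new value follows from pure addition, no multiplication needed. A minimal sketch of one "crank" of the engine:

```python
# The difference engine tabulated polynomials using only addition:
# for a degree-n polynomial, the n-th finite difference is constant,
# so successive values follow from cascaded additions.
def tabulate(poly, start, count):
    """Tabulate poly (coefficients, low order first) at start, start+1, ..."""
    n = len(poly) - 1  # degree
    # Seed the difference columns from the first n+1 directly computed values.
    values = [sum(c * x**i for i, c in enumerate(poly))
              for x in range(start, start + n + 1)]
    diffs = [values[:]]
    for _ in range(n):
        prev = diffs[-1]
        diffs.append([b - a for a, b in zip(prev, prev[1:])])
    col = [d[0] for d in diffs]  # the engine's "wheels", one per column
    out = []
    for _ in range(count):
        out.append(col[0])
        # one crank: each column absorbs the difference column below it
        for i in range(n):
            col[i] += col[i + 1]
    return out

# x^2 + x + 41, the polynomial Babbage himself used in demonstrations
print(tabulate([41, 1, 1], 0, 6))  # [41, 43, 47, 53, 61, 71]
```

After the seed values, every entry in the table costs only n additions, which is exactly what a stack of counting wheels can do.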


Read me...

Skylon - Sci-advent - Day 2


The image shows the flow of hot air passing through the piping in a cooler for a new engine that can lower the temperature of the air to below -140C in just 1/100th of a second.

The cooler is part of a new type of spaceplane engine demonstrated by Reaction Engines Ltd (REL) of Oxfordshire. The company ran a series of tests on key elements of its Sabre propulsion system under the independent eye of the European Space Agency (ESA).

REL's idea is for an 84m-long vehicle called Skylon that would do the job of a big rocket but operate like an airliner, taking off and landing at a conventional runway. The vehicle would burn a mixture of hydrogen and oxygen but in the low atmosphere the oxygen would be taken from the air, in the same way that a jet engine breathes air.

Taking its oxygen from the air in the initial flight phase would mean Skylon could fly lighter from the outset with a higher thrust-to-weight ratio, enabling it to make a single leap to orbit, rather than using and dumping propellant stages on the ascent - as is the case with current expendable rockets. A key element is the engine's ability to manage the hot air entering its intakes at a high speed. These gases have to be cooled prior to being compressed and burnt with the onboard hydrogen.

REL's solution is a module containing arrays of extremely fine piping that can extract the heat and plunge the inrushing air to about -140C in just 1/100th of a second. Ordinarily, the moisture in the air would be expected to freeze out rapidly, covering the piping in a blanket of frost and disrupting its operation.

It is the innovative helium cooling loop with its pre-cooler heat-exchanger that REL has been validating on an experimental rig.

Read me...

Rocknest - Sci-advent - Day 1


In the tradition of Advent Calendars, I will be posting some science related entries from today up until Dec 24th... So, here's the first entry:


Panoramic View From 'Rocknest' Position of Curiosity Mars Rover
This panorama is a mosaic of images taken by the Mast Camera (Mastcam) on the NASA Mars rover Curiosity while the rover was working at a site called "Rocknest" in October and November 2012.

The center of the scene, looking eastward from Rocknest, includes the Point Lake area. After the component images for this scene were taken, Curiosity drove 83 feet (25.3 meters) on Nov. 18 from Rocknest to Point Lake. From Point Lake, the Mastcam is taking images for another detailed panoramic view of the area further east to help researchers identify candidate targets for the rover's first drilling into a rock.

The image has been white-balanced to show what the rocks and soils in it would look like if they were on Earth. The raw-color version shows what the scene looks like on Mars to the camera.

Image Credit: NASA/JPL-Caltech/Malin Space Science Systems

Read me...

The most secret of messages...

Franco-British carrier pigeon which makes long distance flights (Photo credit: National Library of Scotland)

David Martin, a retired British civil servant was cleaning the chimney of his house in Bletchingley (Surrey), 35 miles south of London, when he found the remains of a pigeon. But this was not any pigeon: it was a carrier pigeon, and its leg still had attached to it a red metallic container with an encrypted message inside. Experts from the UK Government Communications Headquarters (GCHQ) have recently given up and recognised that it is almost impossible to find out the content of that message.

They know it is a message of World War II, that the addressee was X02, code name of the Bomber Command and believe that the pigeon could have started its flight around the time of the Normandy landings. They also know that it was heading to Bletchley Park, the communications centre during the war, some 100 km north of London.

They also know other things. They know that the sender's signature, Serjeant W Stot, suggests that it was a message from the RAF. The spelling of the word "Serjeant" is crucial, as the RAF used the letter "j" instead of "g".

However, they have failed to work out the meaning of the message. They have no idea how to decipher the 24 blocks of five letters each, which to the eyes of layman and expert alike are nothing more than an alphabet soup of seemingly meaningless strings of letters. Take a look at the first line of the message: AOAKN HVPKD FNFJW YIDDC.

These types of codes were used in operations so that the messages could only be read by the people who sent them and the rightful recipients.

GCHQ have said that there are two possibilities. If the code was based on a codebook designed specifically for a single operation or mission, "it is unlikely" that it can ever be deciphered. If the key was used only once, the encryption is truly random, and the key was held only by the person who sent the message and the person who was to receive it, then it is quite likely that the message is indecipherable.

The code is impenetrable to current government experts, and it has been suggested that the only way to gain some insight is to collaborate with experts active at the time the message was sent, i.e. the people who were at Bletchley Park during the war and are now around 90 years old.

The British Army trained 250,000 carrier pigeons to be used in their secret communications during the war. They were particularly useful during the Normandy landings because Churchill had imposed a blockade of radio communications to increase safety and avoid providing clues to the Germans. The pigeons could fly at speeds greater than 125 kilometres per hour and cover distances of over 1,500 kilometres.

Percy, as this particular pigeon has been named, was probably disoriented and lost due to bad weather or simply exhausted after crossing the English channel. Carrier pigeon enthusiasts have proposed that the government posthumously grant Percy the Dickin Medal, the highest award given to animals for their courage.

Can you help crack the code?

The pigeon message is as follows:



Read me...

Thank you very much for your teachings, Dr Leopoldo García-Colín Scherer. It was a true honour to have been your student.


Thank you very much for your teachings, Dr Leopoldo García-Colín Scherer. It was a true honour to have been your student.

Termodinamica Estadistica Garcia-Colin

Read me...

A universe without a centre...

Finally a bit of time to write something for the blog, and this time a great opportunity to answer a question that Jorge Soto, yes, the one from Moenia, sent me at the start of the week. Truly an event that deserves a blog entry in itself, but even better when it gives me the chance to write about something interesting. Thanks, Jorge!
The question went something like this: "after the Big Bang, where is life more likely to exist, near the centre or far from it?".
When I received the question, I told Jorge that the quick answer would be that in principle it is equally likely anywhere, but that the most interesting part of the question (for me, at least) is the fact that the universe has no centre...
According to the standard theories of cosmology, the universe began with a "Big Bang" about 14 billion years ago and has been expanding ever since. However, there is no centre to the expansion: it is the same everywhere. The beginning was a singularity, but it is important to stress that a singularity is not a tangible thing, not a point. You cannot point at one and say "look at that, a singularity". The Big Bang is not something that happened at a particular place, and as such it should not be pictured as an ordinary explosion. The universe is not expanding outward from a centre into space; rather, the whole universe is expanding, doing the same thing everywhere. Or, seen another way, the centre is everywhere.
In 1929 Edwin Hubble announced that, according to his measurements of the velocities of galaxies at various distances from us, the more distant a galaxy, the faster it recedes. This might suggest that we are at the centre of the expanding universe, but in fact, if the universe expands uniformly according to Hubble's law, it will appear to do so from any vantage point.
If one night we observe a galaxy, call it A, receding from us at 10,000 km/s, an alien in that galaxy will see the Milky Way receding at the same 10,000 km/s in the opposite direction. Another galaxy, B, twice as far away in the same direction as A, will be seen by us receding at 20,000 km/s. The alien in galaxy A will measure galaxy B receding at 10,000 km/s. In other words, from the point of view of an alien in B, everything expands away from wherever they are, exactly as it does for us here on Earth.
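The galaxy A/B bookkeeping above can be checked numerically with Hubble's law, v = H0 d: recession velocities are just position differences scaled by H0, so every observer measures the same law. (The distances below are illustrative, chosen so the Milky Way sees A recede at roughly 10,000 km/s.)

```python
# Hubble's law, v = H0 * d, seen from different galaxies lying on a line.
H0 = 70  # km/s per Mpc, approximate modern value

galaxies = {"MilkyWay": 0.0, "A": 143.0, "B": 286.0}  # distances in Mpc

def recession(observer, target):
    """Velocity of target as measured by observer (positive = receding)."""
    return H0 * (galaxies[target] - galaxies[observer])

print(recession("MilkyWay", "A"))  # ~10,000 km/s
print(recession("A", "MilkyWay"))  # same speed, opposite direction
print(recession("A", "B"))         # A sees B recede just as we see A
```

Because only differences of positions enter, shifting the origin to any galaxy changes nothing: no observer can use these measurements to claim the central seat.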
An analogy used by prominent scientists such as Arthur Eddington and Fred Hoyle is that of an expanding balloon. In his 1960 book The Nature of the Universe, Hoyle writes: "My non-mathematical friends often tell me that they find it difficult to picture this expansion. Short of using a lot of mathematics, the best I can do is use the analogy of a balloon with a large number of dots marked on its surface. If the balloon is blown up, the distances between the dots increase in the same way as the distances between the galaxies."
This is a good analogy, but it must be understood properly; otherwise it can cause more confusion. As Hoyle himself said: "There are several important respects in which it is definitely misleading." It is important to keep in mind that the three-dimensional space we observe in the universe corresponds to the two-dimensional surface of the balloon. The surface is homogeneous, with no point that can be singled out as the centre. The centre of the balloon itself is not on the surface, and therefore should not be taken as the centre of the universe. If it helps, we can think of the radial direction of the balloon as time, but it is better not to regard points off the balloon's surface as part of the universe at all. Space can thus be curved without there being other dimensions outside it. When using this analogy, several things should be kept in mind:
  1. The two-dimensional surface of the balloon is analogous to the three dimensions of space.
  2. The three-dimensional space in which the balloon is embedded is not analogous to any higher-dimensional physical space.
  3. The centre of the balloon does not correspond to anything physical.
  4. The universe may be finite in size and growing like the surface of an expanding balloon, but it could also be infinite.
  5. Galaxies move apart like dots on the expanding balloon, but the galaxies themselves do not expand, because they are held together by gravity.

The metric expansion of space. The inflationar...

If we pictured the Big Bang as an ordinary explosion with a central point, that centre would be the hottest spot, with a sphere of material expanding away from it. However, as far as we understand, the Big Bang was not an explosion of that kind; it was an explosion of space itself, not in space. If the Big Bang had been an ordinary explosion in pre-existing space, it would be possible to see the edge of the expansion, with empty space beyond. Instead, when we look out we see back towards the Big Bang itself and detect a faint background glow from the hot primordial gases of the early universe. This "cosmic microwave background radiation" is uniform in all directions. It tells us that it is not matter expanding outward from a point, but space itself expanding uniformly. And that is profound in itself.
It is important to note that other observations support the idea that there is no centre of the universe, at least as far as observations can reach. The fact that the universe expands uniformly would not by itself rule out a denser, hotter place that could be called "the centre", but careful studies of the distribution and motion of galaxies confirm that the universe is homogeneous on the largest scales we can observe, with no sign of a special point we could call a centre.
The idea that the universe should be uniform (homogeneous and isotropic) on very large scales is known as the "cosmological principle", a name proposed by Arthur Milne in 1933. Despite the discovery of rich structure in the distribution of galaxies, most cosmologists still support the cosmological principle, whether for philosophical reasons or because it is a useful hypothesis that no observation has contradicted. However, our view of the universe is limited by the speed of light and the finite time since the Big Bang. The part we can observe is very large, but it is probably tiny compared with the whole universe. We have no way of knowing what the universe looks like beyond the visible horizon, and no way of knowing whether the cosmological principle holds on larger distance scales.
Once that is understood, it is easy to see why life is equally likely to exist anywhere in the universe.
Read me...

The Shuttle Enterprise

While visiting the city that never sleeps, I finally had the chance to visit the Intrepid Sea, Air and Space Museum in New York. The main attraction for me was the prospect of seeing, and being close to, the shuttle Enterprise, and having a look at Concorde.

The museum is quite big and there are plenty of things to see. The shuttle pavilion is at the very end of the aircraft carrier Intrepid, and the whole visit was very exciting. The shuttle is housed in a temporary venue and I look forward to seeing the permanent building when it is finished. I was surprised to learn the story behind the name of the shuttle itself. It seemed a bit of a coincidence that it shares its name with the famous Star Trek spaceship.

The original name was supposed to be Constitution, in honour of the USA's bicentennial. But more than 400,000 Trekkies had something else in mind: they petitioned US President Gerald Ford to change the name to Enterprise, after the starship captained by James T. Kirk. The pavilion shows a picture taken on September 17th, 1976, the day of the shuttle Enterprise roll-out ceremony, with some of the Star Trek cast members along with the show's creator Gene Roddenberry.

Enterprise 3 Enterprise 2 Enterprise 1

Read me...

Martian Unconformity

A few posts ago I wrote about the Great Unconformity, and I was quite interested when I heard about Curiosity, the rover, photographing unconformities on Mars. As I explained in the aforementioned post, an unconformity is effectively a discontinuity in the layers of sedimentary rock, or strata.

Artist's rendering of a Mars Exploration Rover. (Photo credit: Wikipedia)

The Mars rover took a picture with its 100mm telephoto lens, and it turned out that the subject of this landscape was a geological unconformity. The picture shows sediments that seem to have been deposited at a different angle from those below them. We have seen this before in deposits on Earth, where the phenomenon is due to either volcanic or tectonic activity. Images taken from orbit suggested that the lower part consisted of sediments rich in so-called hydrated minerals, i.e. minerals formed in the presence of water (yes, that is right, water!), but that the layers above lacked them. At this stage we will have to wait for further investigations in order to get more evidence that the two layers were laid down in different environments.


Read me...

Planetary system with two suns

Tatooine's twin suns from the Star Wars saga. (Photo credit: Wikipedia)

If you have seen Star Wars Episode IV, you probably remember that famous scene when Luke considers his options after Uncle Owen and Aunt Beru are found dead. Atmospheric music by John Williams plays in the background and at the distance we see two suns over the horizon of the planet Tatooine.

But that is certainly not the only appearance of a double-star system with planets in science fiction. What about Gallifrey, the home planet of the Time Lords in Doctor Who, or Magrathea, the luxury-planet factory in The Hitchhiker's Guide to the Galaxy? Interesting indeed, but not as interesting as the prospect of a real pair of stars with their own planetary system, right? Well, scientists have recently reported the discovery of two planets orbiting a binary system, spotted by the Kepler space telescope.

So, what is this system called? Well, it has the inspired name of Kepler-47, and it is located in the constellation of Cygnus, some 5,000 light-years away. One of the planets is said to be slightly larger than Uranus and bears the name Kepler-47c, while the other is about three times the size of our own Earth and is called Kepler-47b. As for the stars, one is very Sun-like and the other about a third the size of its partner.

Kepler-47c, the outer planet, is particularly interesting as it lies in the habitable zone (or Goldilocks zone) of the system; in other words, it is located in the region where it is neither too cold nor too hot for liquid water to exist on the surface of a planet, and thus the possibility of life there is higher. Whether that means Time Lords, planet-making factory workers or rebels is certainly not known...

Read me...

Neil Armstrong


Neil Armstrong will always be remembered as the first man to walk on the Moon. He died on Saturday 25th of August, weeks after heart surgery and days after his 82nd birthday.

Neil A. Armstrong was born in Wapakoneta, Ohio, on August 5, 1930. He served as a naval aviator between 1949 and 1952. In 1955 he joined the National Advisory Committee for Aeronautics (NACA), a predecessor of the National Aeronautics and Space Administration (NASA).

Armstrong gained his status as an astronaut in 1962. He was then assigned as command pilot for the Gemini 8 mission. Gemini 8 was launched on March 16, 1966, and Armstrong performed the first successful docking of two vehicles in space.

As spacecraft commander for Apollo 11, the first manned lunar landing mission, Armstrong became the first man to land on the moon and the first to step on its surface.

He was Professor of Aerospace Engineering at the University of Cincinnati from 1971 to 1979. From 1982 to 1992, Armstrong was chairman of Computing Technologies for Aviation, Inc., of Charlottesville, Virginia.

In an address to America’s National Press Club in 2000, Armstrong offered the following self-portrait: “I am, and ever will be, a white-socks, pocket-protector, nerdy engineer, born under the second law of thermodynamics, steeped in steam tables, in love with free-body diagrams, transformed by Laplace and propelled by compressible flow.”

Read me...

And What Is the Higgs Boson?

One of physicists' most ambitious dreams is the description of all physical forces as a single set of mathematical relations, commonly known as unification. All observed phenomena are described by five forces: gravity, magnetism, electricity, the weak nuclear force and the strong nuclear force.

Unification has already happened in some cases. In the 1860s, for example, James Clerk Maxwell showed that magnetism and electricity are described by a single set of equations. This is why we speak of four forces, electricity and magnetism having been unified into something called electromagnetism (very imaginative...). Something similar happened in the 1970s, when Abdus Salam, Sheldon Glashow and Steven Weinberg unified the weak nuclear force and electromagnetism.

One may ask how forces are transmitted, and the current answer from physics is that they are not transmitted directly between objects; rather, forces are described by intermediaries that physicists call fields. You have surely heard of the electric field and the magnetic field, right?

In other words, all the forces of nature are mediated by fields that arise from the exchange of particles, which the Standard Model calls "gauge" bosons. In the case of the electromagnetic force, for example, electrically charged particles interact thanks to the photon, the exchange particle of the electromagnetic force. Likewise, the weak nuclear force, a short-range interaction responsible for some forms of radioactivity, is governed by the W and Z bosons.

The short range of the weak nuclear force, and hence its weakness, arises because the W and Z bosons are very massive particles, unlike the massless photon. In 1983, scientists at CERN discovered the W and Z bosons, and the so-called electroweak theory was thereby convincingly verified. However, the origin of their masses remains a mystery. The best explanation at present is the Higgs mechanism.

The theory exhibits a symmetry between the photon and the W and Z bosons; however, this symmetry is spontaneously broken, and this breaking is believed to be responsible for the masses of the W and Z. A field, called the Higgs field, is thought to be responsible for the genesis of mass. The field is named after the British physicist Peter Higgs. Now, we mentioned that every field has an associated particle; in the case of the Higgs field, that particle is the Higgs boson. The Higgs boson is the only particle in the Standard Model that has not been observed (or perhaps it has, as we shall see below). Its existence would explain how most of the known elementary particles acquire mass, and could account for the difference between the massless photon and the massive W and Z bosons. With the help of the Large Hadron Collider, experimental evidence for (or against) the existence of this particle is expected.

On the 4th of July, CERN called a press conference for an important announcement. It turns out they reported the possible discovery of a new particle "consistent" with the Higgs boson. It has been a 45-year search for an explanation of how matter acquires mass. And the search is not over with this announcement: more work is needed to be certain that this really is the Higgs boson.

Peter Higgs, estuvo presente en la audiencia en el teatro de conferencias del CERN, en Ginebra, quien se apresuró a felicitar al equipo por sus logros. El bosón, como hemos mencionado, lleva su nombre y esto fue realmente un acontecimiento trascendental para él.

El equipo del CMS , en el Gran Colisionador de Hadrones (LHC por sus siglas en inglés) , informó que han visto una señal en los datos que correspondería a una partícula con un peso de 125.3 GeV, que es aproximadamente 133 veces más pesada que el protón. Si realmente se confirma, será uno de los mayores descubrimientos científicos en mi vida, pero aún más emocionante es el hecho de que esto no cierra el capítulo, puede incluso abrir otras vías de investigación y entendimiento. Y como tal, los físicos del CERN dicen que actualmente los datos que tienen son compatibles con el de bosón de Higgs del Modelo Estándar...

Related Articles

Read me...

The Higgs Boson Explained...with a Cartoon

Here you go! Enjoy!



Read me...

CERN announces a Higgs-like boson particle

An example of simulated data modelled for the CMS particle detector on the Large Hadron Collider (LHC) at CERN. Here, following a collision of two protons, a Higgs boson is produced which decays into two jets of hadrons and two electrons. The lines represent the possible paths of particles produced by the proton-proton collision in the detector, while the energy these particles deposit is shown in blue. (Photo credit: Wikipedia)

CERN has called for a press conference today (July 4th) for an important announcement. It turns out that they are reporting the claim of the discovery of a new particle "consistent" with the Higgs boson. It has been a search of 45 years for an explanation of how matter acquires mass. And the search is not yet over with this announcement: more work is needed to be certain that this is indeed the Higgs boson.

Peter Higgs was present in the audience in the conference theatre at CERN, Geneva. He was prompt to congratulate the team for their achievement. The famous boson is named after him and this was really a momentous event for him.

The CMS team, at the Large Hadron Collider (LHC), report that they have seen a signature in the data for a particle weighing 125.3 GeV, which is about 133 times heavier than the proton. If actually confirmed, this will be one of the biggest scientific discoveries in my lifetime, but even more exciting is the fact that this does not close the chapter, it may even open other avenues of research and understanding. And as such, the physicists at CERN say that, currently, the data they have is compatible with the Standard Model Higgs boson...
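As a quick sanity check of that ratio, here is a two-line calculation; the proton rest mass of about 0.938 GeV is a standard value I am supplying, not one quoted above:

```python
# Check the quoted figure: how many proton masses is 125.3 GeV?
higgs_mass_gev = 125.3
proton_mass_gev = 0.938272  # proton rest mass in GeV (assumed standard value)

ratio = higgs_mass_gev / proton_mass_gev
print(f"{ratio:.1f}")  # ~133.5, in line with the "about 133 times" quoted
```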

Read me...

Researchers of Tomorrow

British Library and St Pancras station, with Euston Road on the right, London.

It seems that few doctoral students explore new technologies in their research or understand the variety of information available to them, according to a report commissioned by the British Library and JISC (a body for technology in higher education in the UK). The report can be seen here (in English).

"Researchers of Tomorrow", published on 28 June, surveyed more than 17,000 doctoral students (in the UK) over a three-year period, following 60 of them in depth, in particular those born between 1982 and 1994, the so-called Generation Y.

The report states that, despite being technologically savvy, Generation Y doctoral students know very little about the variety and authenticity of the research information available in new formats, such as online databases, electronic journals and repositories, and few know how to access this information.

They also have little understanding of open access and copyright. Many believe that supervisors would not approve of citing open-access documents, and only 26 per cent know that funders and foundations are beginning to expect open access to the research they support.

Julie Carpenter, one of the report's co-authors and director of the consultancy Education for Change, says the results suggest that doctoral students have been neglected and have experienced a sense of isolation.

Institutional support, in terms of library provision, information about the research environment and training, is not working, and there needs to be a "paradigm shift" in the way the sector supports and engages with doctoral students, she said.

"There is a disconnect between strategic organisations such as JISC, [which] have insisted that these wonderful tools should be used, promoting sharing and moving research into the electronic age, and the institutions themselves," Carpenter added.

Risk aversion

This is reflected in another of the study's findings: although Generation Y students use some online tools such as bookmarks and RSS, very few employ collaborative technologies such as wikis, blogs and Twitter in their research, despite using these tools in their personal lives.

Debbie McVitty, postgraduate research and policy representative at the National Union of Students (UK) and a member of the study's advisory group, attributes the risk aversion partly to the pressure on doctoral students to complete their studies rather than to produce good research.

"The people who will adopt [technologies] early are probably those, such as professors, who are more established in their positions and can afford to be more experimental," she said.

"Getting an academic job can be quite difficult, and so you don't want to take any risks."

Alongside library staff and university administrators, supervisors need to play a better role in informing students, with support tailored to their fields of study, McVitty said.

The report also found a "surprising dependence" among doctoral students on other people's conclusions rather than on original sources.

According to the survey, in four out of five cases doctoral students look for published books and papers in their search for information to support their research, rather than "primary" material such as samples, archives and databases.

Students should also collect data and do original research in addition to exploring such secondary sources, Carpenter commented, but this finding may identify a trend which, if verified, would have "very serious consequences".

Read me...

Happy birthday Turing

Today, 100 years ago, Alan Turing was born. As a form of celebration, Google has put a functioning Turing machine in its latest doodle. A Turing machine is a device that uses a tape of symbols which are manipulated according to certain rules and, as you can imagine, it was proposed by Turing in 1936.

Turing machine
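The rule-following behaviour is easy to sketch in a few lines of code. Here is a minimal, illustrative Turing machine of my own (a toy example, not the machine in the doodle) whose rules flip every bit on the tape and halt on a blank:

```python
# A minimal Turing machine sketch: rules map (state, symbol) to
# (new_symbol, move, new_state); the machine runs until it halts.

def run_turing_machine(tape, rules, state="start", blank="_"):
    tape = list(tape)
    head = 0
    while state != "halt":
        if head >= len(tape):      # extend the tape with blanks as needed
            tape.append(blank)
        symbol = tape[head]
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(tape).rstrip(blank)

# Rules for a bit-flipping machine: invert 0/1, move right, halt on blank.
flip_rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("1011", flip_rules))  # prints 0100
```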

Read me...

Pleasant surprise...

Pleasant surprise to see the poster of the talk at Escuela Superior de Física y Matemáticas (ESFM), IPN Mexico.



ESFM Talk Escuela de Fisica y Matematicas


Read me...

Repulsive Polarons

Yes, indeed this post is about repulsive polarons, but that does not mean that they are repulsive because they cause revulsion or anything of the sort. We are talking about quasiparticles which are predicted to occur when 'impurity' fermionic particles interact repulsively with a fermionic environment. And it turns out that these quasiparticles have now been detected.

Ok, "what is a quasiparticle?" I hear you say. Well, a quasiparticle is a perturbation or disturbance in a medium whose behaviour is that of a particle and which, to all intents and purposes, can be regarded as one. Their study is important in solid-state physics, condensed matter and nuclear physics, as they help us determine the properties of matter.

Rudolf Grimm (Innsbruck) and a team of physicists have experimentally realised the observation of a repulsive polaron in an ultracold quantum gas. The results have been published in Nature.

Various phenomena from condensed matter physics can be experimentally simulated using ultracold quantum gases. In these systems, the control that can be achieved over the many-body interactions is greater, and this is always helpful.

In order to observe repulsive polarons, the physicists used an ultracold quantum gas of lithium and potassium atoms, controlling the atomic interactions using electromagnetic fields and RF pulses. The potassium atoms are driven into a state in which they repel the surrounding lithium atoms. This interaction can be seen as a particle with modified properties, a quasiparticle. Once the researchers analysed the energy spectrum of the system, they were able to demonstrate repulsive polarons.

The observation of these polarons is important as it demonstrates that they can indeed be observed. In condensed matter, quasiparticles decay very quickly, and this poses a problem for studying them. In these experiments, the researchers say, the polarons showed an almost tenfold increase in lifetime compared with earlier experiments in similar systems. This opens up the possibility of a platform for a more detailed analysis of many-body systems that rely on repulsive interactions.

Read me...

The Great Unconformity

You might think that this is related to someone sitting in the most uncomfortable position ever, or to a very awkward moment, and you would be completely wrong. You just have to pay attention to the fact that there is an "n" and not an "m" in there...

The first time I heard about the Great Unconformity was a few days ago, on the Nature podcast. An unconformity in this context refers to the contact surface between younger and older rocks in the geological record where a discontinuity is present. So the Great Unconformity refers to the large gaps left in the planet's rock record (ahem... nothing to do with music...) where young sedimentary rocks sit on top of much older metamorphic rock. For example, in the Grand Canyon, a layer of sandstone dating back 500 million years sits on top of a 1.7-billion-year-old metamorphic rock layer. There are similar unconformities around the world.

Blacktail Canyon and The Great Unconformity - Grand Canyon (Photo credit: Al_HikesAZ)

Why is this so interesting, you ask? Well, among other things, these gaps leave a limited record precisely when life was advancing very quickly. Around 500 million years ago, new forms of multicellular life appeared, something that has come to be known as the Cambrian explosion. In the article referred to in the Nature podcast, researchers from the University of Wisconsin and Pomona College link changes in ancient ocean chemistry to this remarkable transformation of life. One important change is that of biomineralisation, by which organisms started using minerals, such as calcium carbonate, to build structures such as shells and skeletons. The formation of the Great Unconformity "may have been an environmental trigger for the evolution of biomineralisation and the 'Cambrian explosion'", the researchers say.

Read me...

Uploading videos to Vimeo

Now that you have created your videos with either your PC or your Mac, you are ready to share them with the world. I find Vimeo very easy to use and quite flexible in terms of content, file sizes and things of that sort. In this video I show you very quickly how to create an account and how to upload your masterpiece.

As usual, let me know what you think.


Read me...

Videocasting with a PC

Talking to some people about screen capturing and video tutorials, I came across the fact that, although there is some interest in the activity, there is a perception that you need sophisticated tools to create even the simplest video presentation.

In this video I show how some simple videos can be produced by capturing screenshots using a PC running Windows. The tools that I use are CamStudio and Freemake Video Converter, which are readily available on the web.

As usual, any comments are more than welcome. Enjoy!


Read me...

Royal Society of Chemistry Library

Very nice working/reading space at the Royal Society of Chemistry, in the heart of central London!

Royal Society of Chemistry

Read me...

Interview with Samuel Richards - Quantum Tunnel Podcast

You can download this podcast in iTunes or Feedburner.

The Quantum Tunnel Podcast brings you an interesting chat with Samuel Richards, an undergraduate student at the University of Hertfordshire who has recently had the opportunity to collaborate with researchers at the University of Sydney and the Australian Astronomical Observatory, working on SAMI.


Travelling faster than light

One of the cornerstones of modern physics is the idea that nothing can travel faster than the speed of light. Nonetheless, researchers at the Gran Sasso facility in Italy have recently reported on the recording of particles travelling at speeds forbidden by the theory of relativity.

Researchers on the Oscillation Project with Emulsion-Tracking Apparatus, or OPERA, recorded the arrival times of neutrinos sent from CERN. The trip would take a beam of light 2.4 milliseconds to complete, but after three years of experiments, the scientists report the arrival of 15,000 neutrinos sixty billionths of a second early. The result is so unexpected that the OPERA researchers say they hope the physics community will scrutinise their experiment and help uncover any flaws. The results have been reported on the arXiv.
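A quick back-of-the-envelope check of what those figures imply; only the 2.4 ms flight time and the 60 ns early arrival come from the report, while the interpretation as a fractional speed excess is my own arithmetic:

```python
# Rough check of the OPERA figures: how large a speed excess does a
# 60 ns early arrival over a 2.4 ms light flight time correspond to?

c = 299_792_458.0     # speed of light, m/s
flight_time = 2.4e-3  # light travel time quoted above, s
early = 60e-9         # early arrival, s

baseline = c * flight_time               # implied distance, roughly 720 km
fractional_excess = early / flight_time  # (v - c)/c, to first order

print(f"baseline ~ {baseline / 1e3:.0f} km")
print(f"(v - c)/c ~ {fractional_excess:.1e}")  # about 2.5e-5
```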

Good-bye Tevatron

At the end of September, the Tevatron facility near Chicago fired its last particles after US federal funding ran out. During its more than 25 years, the Tevatron has without a doubt left a rich legacy; for instance, one of nature's heaviest elementary particles, the top quark, was found here.

The Tevatron was run by the Fermi National Accelerator Laboratory, or Fermilab, where since 1985 scientists had been accelerating protons and antiprotons around its 6 km ring in order to unlock the secrets of the Universe. The closure of the facility is indeed a solemn occasion, at a time when budgets for science are increasingly being squeezed.

Amazon dam halted again
A Brazilian judge has suspended work on the Belo Monte hydroelectric plant in the Amazon jungle. In previous podcasts we have reported on the on-and-off plans for the plant.

In a ruling posted last week, the judge, Carlos Eduardo Martins, said he halted construction of the dam because it would harm fishing by indigenous communities in Pará State. Back in February, construction was halted by another judge, but that ruling was overturned. The Brazilian government strongly backs the project and has said it will appeal against the new ruling.

Read me...

Interview with Pável Ramírez - Quantum Tunnel Podcast in Spanish

You can download this podcast in iTunes or Feedburner.

Image via Wikipedia

On this occasion, the Quantum Tunnel Podcast brings you a chat with Pável Ramírez, who is pursuing doctoral studies in optics at Imperial College London. His doctoral research is related to extending the depth of field.

During the interview, Pável recommends watching the film El Violín, directed by Francisco Vargas.


Travelling faster than light
One of the pillars of modern physics is the idea that nothing can travel faster than the speed of light. However, researchers at the Gran Sasso facility in Italy have recently reported finding particles travelling at speeds forbidden by the theory of relativity.

Researchers on the Oscillation Project with Emulsion-Tracking Apparatus, or OPERA, recorded the arrival times of neutrinos sent from CERN. The trip would take a beam of light 2.4 milliseconds, but after three years of experiments, the scientists report the arrival of 15,000 neutrinos some sixty billionths of a second early. The result is so unexpected that the OPERA researchers say they hope the physics community will scrutinise their experiments and help uncover any flaw. The results have been reported on the arXiv.

Farewell to the Tevatron

At the end of September, the Tevatron facility near Chicago fired its last particles after the budget provided by the US federal government ran out. Over its more than 25 years, the Tevatron left, without a doubt, a rich legacy; for example, one of nature's heaviest particles, the top quark, was found here.

The Tevatron was run by the Fermi National Accelerator Laboratory, or Fermilab, where since 1985 scientists had been accelerating protons and antiprotons around a six-kilometre ring in order to uncover the secrets of the Universe. The closure of the facility is without doubt a solemn occasion, at a time when budgets for science are increasingly being squeezed.

Amazon dam halted again

A Brazilian judge has suspended work on the Belo Monte hydroelectric plant in the Amazon. In previous podcasts we have reported on the changes of plans the plant has undergone.

In a ruling published last week, the judge, Carlos Eduardo Martins, said he halted construction of the dam because it would harm fishing by indigenous communities in the state of Pará. In February, construction was stopped by another judge, but that ruling was overturned. The Brazilian government strongly backs the project, and it has been reported that it will appeal against the new ruling.

Read me...

Online/Offline Communities - Science on Line London 2011

Image by AJC1 via Flickr

This year I had the great opportunity of participating in the discussions of one of the breakout sessions at the SOLO11 conference. The topic of the session was the importance of offline communities in online networking. The session was organised by Eva Amsen and co-hosted by Paula Salgado and myself.

It seemed quite interesting to us that people were coming together for an event about science online. Why not organise it solely as an online event? Is it because communities work better when there is support offline?

Eva started off the discussion with some examples of offline communities moving online. She talked about the Node, a community that started as a suggestion from an existing network of developmental biologists. Other examples included the arXiv and Facebook. Here, some of the things that Michael Nielsen mentioned in his opening presentation resonated with what was being discussed: these communities started as small groups, and that is why they worked.

Paula talked to us about her experience in the online and offline communities, including I'm A Scientist, Get Me Out Of Here. Incidentally, you can listen to the interview she gave me for the Quantum Tunnel Podcast about her involvement with this programme. She also mentioned Science is Vital.


As for me, I had the pleasure of talking about my involvement in organising UKSciTweetups. The UK Science Tweetup, or UKSciTweetup, is a quasi-regular meeting of scientists and sci-curious tweeps, usually on a weekday evening at a pub. Attendees are usually people who use Twitter and are interested in scientific topics. The tweetups are organised and followed up using a hashtag, #ukscitweetup; anyone interested in the tweetups just needs to bookmark and/or subscribe to a Twitter search for the hashtag. Everyone is welcome: you don't have to be a scientist, but you must be interested in science.

There has been some debate as to why UK is used in the hashtag, since most of the events happen in London, where they first started. The standard answer is that anyone in the UK can start their own chapter; I believe there have been some successful events in Bristol and Manchester, but having more would be great.

In my opinion, there seems to be a general misconception that online communities are what it says on the tin: simply and exclusively online. This is not, and should not be, true. The thing to remember is that they are first and foremost communities: collections of people who share a common interest, aim or goal. The fact that they first come together online does not preclude them from meeting offline; doing so enriches the experience and can be beneficial, as the ties between members become more meaningful, and this has an impact on the way people use the community.

Meeting offline goes beyond mere face-to-face interaction with other members, as people usually tend to bring along others who either are not in the online community (in this case, Twitter) or are users but do not interact with other members.

In my experience, it has been very enriching to take part in organising some tweetups, but I must admit that keeping momentum can be hard. Having the meetings at a pub makes it easier for people to come and go (there was one organised to coincide with the late opening of the Science Museum, but trying to meet up with people there was a disaster).

More recently I have not had as much time but that is not to say that other advocates are not active. It is important to mention that the aim of the events is simply to socialise with other people interested in science, so other than the hashtag there is no formal organisation and events tend to happen quite organically.

Having online communities is nothing new; they seem to appear and disappear like fairy lights (MySpace anyone? Google+?). The inherent connectivity provided by the web offers a very convenient way for people to meet others with common interests, or to seek out people to help them with problems or issues they face. However, there are many limitations when it comes to building a strong community this way. Meeting offline can address some of these issues. Online interactions are relatively easy to establish, but they tend to be transient: members don't log back in, or they move on to the latest networking tool. In that sense, things become easier when the virtual space becomes a bit more tangible.

Going offline:

So why go offline? Being behind the computer screen provides a certain sense of safety, but there are benefits to going offline. First and foremost, meeting the people we chat with online makes them real. The anonymity of the internet provides the ease of starting a relationship, but there is nothing like a handshake to consolidate it. Spending some time actually chatting in a conversation down the pub, for example, rather than reading each individual utterance in your Twitter timeline, allows for what I would call true bonding. Participants leave feeling that they have truly connected with peers, for instance by learning finer details about them than an online discussion permits.

Making eye contact with someone and being able to read their body language makes a huge difference, and can increase or decrease your interaction with that person. Given that members have presumably interacted online in the past, this is much easier than meeting complete strangers, and things flow much more quickly.

If I were asked about my top tips to build an online-offline community I would have to include:

  1. Define a purpose or a cause the group cares about: in the case of UKSciTweetup it is science, in a very general sense of the word. The group includes a bunch of physicists, astronomers, mathematicians, biologists, chemists and, most importantly, their friends (as we like to put it).
  2. Build conversation: in the case of UKSciTweetup, engaging with the community happens naturally (via Twitter), following the same logic of being free to follow/unfollow people. UKSciTweetup is open to anyone who engages in the conversation and turns up at the pub. This opens the door for members to feel that they have an opportunity to be involved in the overall running of the events, which translates into a more cohesive community.
  3. Build momentum: momentum is a huge factor, and keeping it going can be hard. Once you get some steam, things flow much better and people get more involved. Nonetheless, this is easier said than done. Creating events and meetups for the online community is a great way to keep things going.
  4. Give people the opportunity to volunteer: if people feel they can contribute and are keen to participate, the community benefits. Things can be as simple as making recommendations, organising parts of meetups or simply disseminating information. (Anyone interested in organising the next event, by the way?)

It is obvious that we are now in an era of online culture. However, that does not mean that we cannot build or leverage an offline community to help the online one, or vice versa. It might sound a bit confusing, but there are common features in both, and these should be exploited to benefit the community.

Read me...

So, what is the Higgs boson?

Image via Wikipedia

One of the most ambitious dreams of physicists is the description of all physical forces as a single set of mathematical relations; this is commonly referred to as unification. All observed phenomena are said to be described by five forces: gravity, magnetism, electricity, the weak force and the strong force.

Unification has happened in some cases; for example, in the 1860s James Clerk Maxwell showed that magnetism and electricity are described by a single set of equations. This is why some people talk about four forces, after electricity and magnetism were unified into what is called electromagnetism. Something similar happened in the 1970s, when Abdus Salam, Sheldon Glashow and Steven Weinberg unified the weak nuclear force and electromagnetism.

One can ask how forces are transmitted, and the current answer from physics is that they are not transmitted directly between objects; instead, forces are described by intermediaries called fields. In other words, all forces in nature are mediated by fields which result from the exchange of particles that the Standard Model calls gauge bosons. For instance, in the case of the electromagnetic force, the interaction of electrically charged particles happens thanks to the photon, which is the exchange particle for this force. Similarly, the weak force, a short-range interaction responsible for some forms of radioactivity, is governed by the W and Z bosons.

The short range of the weak force, and hence its weakness, comes about because the W and Z bosons are very massive particles, unlike the massless photon. In 1983, scientists at CERN discovered the W and Z bosons, and thus the so-called electroweak theory was convincingly verified. However, the origin of their masses is still a mystery. The best explanation at the moment is the Higgs mechanism.

The theory shows a symmetry between the photon and the W and Z; nonetheless, this symmetry is spontaneously broken, and it is thought that this breaking is responsible for the mass of the W and Z bosons. It is thought that there is a field, called the Higgs field, which is responsible for the genesis of mass. This field is named after the British physicist Peter Higgs. Now, we mentioned above that every field has an associated particle; in the case of the Higgs field, this is the Higgs boson. The Higgs boson is the only particle in the Standard Model that has not been observed. Its existence would explain how most of the known elementary particles become massive, and would explain the difference between the massless photon and the massive W and Z bosons. The Large Hadron Collider is expected to provide experimental evidence of the existence (or not) of this particle.




Read me...

I'm not a scientist, but I had a go... Student during work experience


During the last few days, Daniel Zheng was visiting me and had a chance to work on some problems using graph theory and networks. Here is what he has to say about this week…

I’m now coming to the end of my placement, having finished writing the (surprisingly complicated) Octave/Gnuplot script to plot a graph of collaboration networks for Medicine during the year 2007. I’ve definitely learned a few things, such as not to be afraid of command-line software, basic operations in Octave and MATLAB® and that it is much more satisfying creating a graphic diagram completely from scratch, especially when it involves hours of typing repeated commands. Computers are very interesting when you can interact with their underlying, fundamental workings, and I can now see how lucky we are today to have beautifully polished operating systems that don’t spit out pages of error messages when you forget that the file name begins with a capital.
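As a rough illustration of the kind of collaboration network Daniel was plotting, here is a toy sketch in Python with made-up author lists; the actual work used Octave/Gnuplot on real 2007 medicine publication records:

```python
# Build a toy co-authorship network: each paper's author list adds
# an edge between every pair of its authors (hypothetical data).

from itertools import combinations
from collections import defaultdict

papers = [
    ["Ada", "Ben", "Cara"],  # each list is one paper's authors
    ["Ben", "Cara"],
    ["Cara", "Dan"],
]

edges = defaultdict(int)  # (author, author) -> number of joint papers
for authors in papers:
    for a, b in combinations(sorted(authors), 2):
        edges[(a, b)] += 1

print(edges[("Ben", "Cara")])  # prints 2: they co-authored two papers
```

The resulting weighted edge list is exactly what a plotting tool (Gnuplot, Graphviz, or similar) would consume to draw the network.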

I’ve actually really enjoyed the last few days, and I think it’s given me a taste of what university maths & physics might be like; hopefully that’s what I’ll be doing for four years, so it’s nice to be sure I’ll like it! Learning these sorts of computer skills is also likely to strengthen my application for those exact courses, and I do feel like I’ve stretched the boundaries of my own knowledge (if not, as correctly predicted, that of the wider scientific community). Most of all, though, I’m incredibly grateful to Dr Rogel-Salazar for giving up his time and his office space to teach me all of this, for troubleshooting my computer when things went wrong, and (of course) for getting me free food at the faculty barbeque. It’s been a very intriguing experience, different from what I’m used to at school, and hopefully he’ll continue to provide this great opportunity for others like me; anyone who can, should definitely give it a go.

Anyway that’s enough from me, so I’ll be off now…

Read me...

I'm not a scientist, but let me have a go... Student during work experience

Graph, created in Neato
Image via Wikipedia

This week, yet another enthusiastic student is doing a bit of work experience with me. This time it is about graph theory, network analysis and their applications. You never know, he might even help me overcome the deafening silence of the Quantum Tunnel Podcast!

My name is Daniel Zheng and I am a sixth form student at Camden School for Girls. I am about to begin studying for my A2s, and by the end of next year I should have full A-levels in Maths, Further Maths, Physics and Chemistry, and an AS in English Literature. I am hoping to study Maths and Physics at university, and would like to have a career in some sort of science-based industry or field. As well as a keen interest in science and maths, in my spare time I play the French Horn, go rock climbing, play squash and as much more as I can fit in!

Having read many books and articles about scientific progress and advancement, I have always wondered what it’s actually like to work in an active research centre. This work experience is a very good opportunity for me to do that (even if my tasks aren’t likely to change the course of science as we know it…) and get a feeling of how universities operate. I already feel like I’ve learnt a few things about the diverse and interlinked nature of supposedly ‘separate’ fields, and hopefully there will be much more to find out…

Read me...

Leonhard Euler - Quantum Tunnel Podcast

Image via Wikipedia

You can download this podcast in iTunes or Feedburner.

Leonhard Euler (1707-1783) was Switzerland's foremost scientist and one of the three greatest mathematicians of modern times (the other two being Gauss and Riemann).

Euler was a native of Basel and a student of Johann Bernoulli at the University, but he soon outstripped his teacher. His working life was spent as a member of the Academies of Science at Berlin and St. Petersburg. He was a man of broad culture, well versed in the classical languages and literatures (he knew the Aeneid by heart), many modern languages, physiology, medicine, botany, geography, and the entire body of physical science as it was known in his time.  His personal life was as placid and uneventful as is possible for a man with 13 children.

Though he was not himself a teacher, Euler has had a deeper influence on the teaching of mathematics than any other man. This came about chiefly through his three great treatises: Introductio in Analysin Infinitorum (1748); Institutiones Calculi Differentialis (1755); and Institutiones Calculi Integralis (1768-1794). There is considerable truth in the old saying that all elementary and advanced calculus textbooks since 1748 are essentially copies of Euler or copies of copies of Euler.

He extended and perfected plane and solid analytic geometry, introduced the analytic approach to trigonometry, and was responsible for the modern treatment of the functions $latex \log x$ and $latex e^x$. He created a consistent theory of logarithms of negative and imaginary numbers, and discovered that $latex \log x$ has an infinite number of values. It was through his work that the symbols $latex e$, $latex \pi$, and $latex i$ became common currency for all mathematicians, and it was he who linked them together in the astonishing relation $latex e^{\pi i} + 1 = 0$. This is a special case of his famous formula $latex e^{i\theta} = \cos \theta + i \sin \theta$, which connects the exponential and trigonometric functions. Among his other contributions to standard mathematical notation were $latex \sin x$, $latex \cos x$, the use of $latex f(x)$ for an unspecified function, and the use of $latex \Sigma$ for summation. He was the first and greatest master of infinite series, infinite products and continued fractions, and his works are crammed with striking discoveries in these fields.
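Euler's formula is easy to verify numerically; a quick check with Python's standard `cmath` module:

```python
import cmath
import math

# Euler's formula: exp(i*theta) = cos(theta) + i*sin(theta)
theta = 0.7
lhs = cmath.exp(1j * theta)
rhs = complex(math.cos(theta), math.sin(theta))
assert abs(lhs - rhs) < 1e-12

# The special case theta = pi gives the celebrated e^{pi i} + 1 = 0:
# the result below is zero up to floating-point rounding.
assert abs(cmath.exp(1j * math.pi) + 1) < 1e-12
```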

He contributed many important ideas to differential equations: the various methods of reduction of order, the notion of an integrating factor (often called an Euler multiplier), substantial parts of the theory of second order linear equations, power series solutions – all these are due to Euler. In addition he gave the first systematic discussion of the calculus of variations (founded on his basic differential equation for a minimizing curve), discovered the Eulerian integrals defining the gamma and beta functions, and introduced the Euler constant:

$latex \gamma = \lim_{n \rightarrow \infty} \left(1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{n} - \log n \right) = 0.5772\ldots$

which is the most important special number in mathematics after $latex \pi$ and $latex e$. He also worked with Fourier series, encountered the Bessel functions in his study of the vibrations of a stretched circular membrane, and applied Laplace transforms to solve differential equations - all before Fourier, Bessel, and Laplace were born. The origins of topology - one of the dominant forces in modern mathematics - lie in his solution of the Königsberg bridge problem and his formula $latex V - E + F = 2$ connecting the numbers of vertices, edges, and faces of a simple polyhedron.
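Both facts are easy to check numerically: the partial sums of the harmonic series minus $latex \log n$ creep slowly towards Euler's constant, and the polyhedron formula holds for any simple polyhedron, a cube for instance. A small Python sketch:

```python
import math

# Partial sums of the harmonic series minus log(n) converge (slowly)
# to Euler's constant gamma = 0.5772...
def gamma_estimate(n):
    return sum(1.0 / k for k in range(1, n + 1)) - math.log(n)

approx = gamma_estimate(10**6)
assert abs(approx - 0.5772156649) < 1e-5  # the error shrinks like 1/(2n)

# Euler's polyhedron formula V - E + F = 2, checked on a cube
V, E, F = 8, 12, 6
assert V - E + F == 2
```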

In number theory, he gave the first published proofs of both Fermat's theorem and Fermat's two-squares theorem. He later generalized the first of these classic results by introducing the Euler $latex \phi$ function; his proof of the second cost him seven years of intermittent effort. In addition, he proved that every positive integer is a sum of four squares, investigated the law of quadratic reciprocity, and initiated the theory of partitions, which deals with such problems as determining the number of ways in which a given positive integer can be expressed as a sum of positive integers. Some of his most interesting work was connected with the sequence of prime numbers: those integers $latex p > 1$ whose only positive divisors are 1 and $latex p$. He used the divergence of the harmonic series $latex 1 + \frac{1}{2} + \frac{1}{3} + \cdots$ to prove Euclid's theorem that there are infinitely many primes.
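A brute-force sketch of the $latex \phi$ function and of Euler's generalisation of Fermat's theorem (the helper name `phi` is just for illustration):

```python
from math import gcd

# Euler's phi function: how many of 1..n are coprime to n
def phi(n):
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

# Euler's generalisation of Fermat's little theorem:
# a^phi(n) = 1 (mod n) whenever gcd(a, n) = 1
n, a = 20, 3
assert gcd(a, n) == 1
assert pow(a, phi(n), n) == 1

# For a prime p, phi(p) = p - 1, recovering Fermat's theorem itself
assert phi(13) == 12
```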

The distinction between pure and applied mathematics did not exist in Euler’s day, and for him the physical universe was a convenient object that offered scope for methods of analysis. The foundations of classical mechanics had been laid down by Newton, but Euler was the principal architect. In his treatise of 1736 he was the first to explicitly introduce the concept of a mass-point or particle, and he was also the first to study the acceleration of a particle moving along any curve and to use the notion of a vector in connection with velocity and acceleration. His continued successes in mathematical physics were so numerous, and his influence was so pervasive, that most of his discoveries are not credited to him at all and are taken for granted by physicists as part of the natural order of things.

However, we do have Euler's equations of motion for the rotation of a rigid body, Euler's hydrodynamical equation for the flow of an ideal incompressible fluid, Euler's law for the bending of elastic beams, and Euler's critical load in the theory of the buckling of columns. On several occasions the thread of his scientific thought led him to ideas his contemporaries were not ready to assimilate. For example, he foresaw the phenomenon of radiation pressure, which is crucial for the modern theory of the stability of stars, more than a century before Maxwell rediscovered it in his own work on electromagnetism.

Euler was the Shakespeare of mathematics - universal, richly detailed, and inexhaustible.


Bilingualism key to language survival
There are about 6,000 different languages in the world, but just a handful, including English, dominate. Some mathematical models have shown how dominant languages can drive less widely spoken ones into decline and extinction. However, physicists in Spain are challenging this idea. According to Jorge Mira Pérez and his colleagues at the University of Santiago de Compostela, earlier models did not take into account bilingualism, which allows both languages to coexist and evolve.
The researchers compared the results of their model with historical data on the prevalence of Spanish and Galician from the 19th century to 1975 and found that the fit is quite good. They find that both languages can survive so long as each is initially spoken by enough people and the two are sufficiently similar. The paper was published in the New Journal of Physics.
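The earlier models being challenged here are in the spirit of the Abrams-Strogatz equation, in which the fraction x of speakers of language A evolves according to its relative prestige s and a "volatility" exponent a. The sketch below integrates that baseline two-language model with illustrative parameter values; it is not the bilingual model of Mira Pérez and colleagues, which adds a bilingual compartment and a language-similarity parameter.

```python
# Baseline Abrams-Strogatz dynamics for two competing languages.
# x is the fraction speaking language A; s is its relative prestige
# (0.5 means equal status); a is an empirical "volatility" exponent.
def dxdt(x, s=0.6, a=1.31):
    # flow into A from B-speakers minus flow out of A to B
    return (1 - x) * s * x**a - x * (1 - s) * (1 - x)**a

# Simple Euler integration from an even split: with s > 0.5 the more
# prestigious language takes over and the other declines towards extinction.
x, dt = 0.5, 0.01
for _ in range(200_000):
    x += dt * dxdt(x)
assert x > 0.99  # language A has essentially absorbed all speakers
```

This winner-takes-all outcome is exactly what the Santiago de Compostela group argues changes once bilingual speakers and language similarity are added to the dynamics.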

Periodic Table of Shapes
We are all familiar with the periodic table of the elements, whose invention is attributed to Dmitri Mendeleev in 1869 and which has become ubiquitous in classrooms. The table is a visual representation of the periodic law, which states that certain properties of the elements repeat periodically when they are arranged by atomic number. Researchers at Imperial College London are interested in creating a periodic table of shapes, which would become a very useful resource for mathematicians and theoretical physicists looking for shapes in three, four and five dimensions that cannot be broken into simpler ones. These basic building blocks are known as “Fano varieties”; for them to represent practical solutions to physical problems, researchers need to look at slices of the Fano varieties, known as Calabi-Yau 3-folds, which give possible shapes for the curled-up extra dimensions required by string theory.

Enlarging Schrödinger’s cat
Quantum mechanics tells us that a quantum object can exist in two or more states simultaneously; this is called a quantum superposition, and it is usually seen only in very tiny objects. Nonetheless, researchers in Austria have recently demonstrated quantum superposition in molecules composed of up to 430 atoms each.
Erwin Schrödinger proposed a thought experiment to illustrate the apparent paradoxes of quantum theory, in which a cat would be poisoned or not depending on the state of a quantum object. Since the object could be in a superposition of states, the cat would thus be dead and alive at the same time. This highlights the difference between the classical and quantum worlds, and poses the question of how big an object can be while still displaying its quantumness.
Markus Arndt and colleagues have now observed quantum effects in large molecules tailor-made for the purpose - up to 6 nanometres across and with up to 430 atoms, several times larger than the molecules used in similar experiments in the past.

Female hormone holds key to male contraceptive
Contraceptive pills have been on the market for 50 years now, but they are only available for women. Scientists had long known that high doses of certain hormones stop ovulation, but extracting the quantities needed for large-scale production was too difficult. It was not until the synthesis of a progestin by the Mexican chemist Luis Miramontes and his co-workers that oral contraceptives could be created.

Recently, two studies published in Nature (1, 2) point the way to a new class of contraceptive pill. Researchers have shown how sperm sense progesterone, a female sex hormone that serves as a guide to the egg. Progesterone activates a molecular channel called CatSper, which floods sperm cells with calcium. Problems with progesterone sensing could explain some cases of infertility. The results could pave the way to a male contraceptive pill in the future.

Read me...

Ig Nobel Awards Tour

Last Thursday, 17th March, I celebrated St Patrick's day by attending an event at Imperial College London: the Ig Nobel Awards tour.

The show was presented by Marc Abrahams, organiser of the Ig Nobel prizes and editor of the Annals of Improbable Research. It featured several Ig Nobel Prize winners and other 'improbable' researchers.

Matija Strlic, from UCL, talked about “the Smell of Old Books”.

Elena Bodnar, winner of the 2009 Ig Nobel Prize in public health, presented her emergency brassiere, which can be quickly converted into a pair of protective face masks - one for the wearer and one for a needy bystander. She demonstrated the invention, and the idea was also to introduce a version to be worn by males; however, the "prototype" disappeared and a bit of improvisation was required...

Dan Bebber, one of the winners of the 2010 Ig Nobel Prize for Transportation, talked about using slime mould to model an effective railway network. In the experiment, cities were represented by porridge oats that were linked to one another as the slime mould grew.

John Hoyland, editor of the “Feedback” column in New Scientist Magazine talked about some interesting oddities.

An enjoyable evening full of geekiness!



Read me...

Sir Isaac Newton (Parte II) - Quantum Tunnel en Español

Portrait of Isaac Newton.
Image via Wikipedia

You can download this podcast on iTunes or Feedburner.

In the previous episode we mentioned that in 1669 Newton experienced what we might call a year of genius, during which he made some of the most remarkable discoveries in the history of science; he was not, however, always interested in making those discoveries public.

By the late 1670s Newton went through another period of tension with science and devoted himself to other pursuits. By then he had still published nothing on dynamics or gravity, so a great many discoveries lay gathering dust on his desk. Finally, stung and angered by criticism from Robert Hooke, and diplomatically coaxed by Edmund Halley, Newton turned his attention back to scientific problems and began to write his magnum opus: the Principia Mathematica. The Principia was written in 18 months of total concentration, and when the book was finally published in 1687 it was immediately recognised as one of the greatest achievements of the human mind. In it he set out the basic principles of mechanics and fluid dynamics, gave the first mathematical treatment of wave motion, deduced Kepler's laws from the inverse-square law of gravitation and explained the orbits of comets, calculated the masses of the Earth, the Sun and the planets, used the flattened shape of the Earth to explain the precession of the equinoxes, and founded the theory of the tides, among other things.

The Principia Mathematica has always been a difficult book to read, written as it is in a cold, remote style, perhaps fitting for the grandeur of its subject. Moreover, its dense mathematics consists almost entirely of classical geometry, which was little cultivated then and is even less so today.

After the great burst of activity that went into creating the Principia, Newton once again turned away from science. In 1696 he left Cambridge and moved to London to become Warden of the Mint. For the rest of his life he took little part in society, though he was able to enjoy his unique position at the summit of scientific fame. These changes in his interests and surroundings did not diminish his intellectual powers. One evening, for example, at the end of a hard day's work coining money, he heard of the brachistochrone problem posed by Johann Bernoulli, who had described it as a challenge for the keenest mathematical minds in the whole world; Newton solved it that same evening before going to bed.

Also of great interest to science is his publication of Opticks in 1704, in which he assimilated and extended his work on light and colour. As an appendix he added his famous Queries: speculations on areas of science that lay far beyond the scientific understanding of his time. Many of these queries reflect Newton's constant preoccupation with chemistry (or alchemy, as it was then called). He formulated several tentative but carefully considered conclusions, always grounded in experiment, about the probable nature of matter. Although the testing of his ideas had to await the refined experimental work of the late nineteenth and early twentieth centuries, they have been corroborated at least in their general outlines.

Newton has always been regarded and described as the archetypal rationalist, the personification of the Age of Reason. It would perhaps be more accurate to think of him in medieval terms - as a devoted, solitary and intuitive mystic, for whom science and mathematics were tools for uncovering the mysteries of the Universe.


Key to breast cancer found

Cancer experts have identified a gene that causes a particularly aggressive form of breast cancer. The oncogene has been named ZNF703 and is overactive in one in twelve breast cancers. Scientists working for Cancer Research UK carried out the research and say the gene is a key candidate for the development of new breast cancer drugs. The study was published in the journal EMBO Molecular Medicine.

Physicists put the laser in reverse

Most of us are familiar with laser light, so it may seem rather strange to think of a laser that absorbs a bright beam instead of emitting one. Nevertheless, scientists at Yale University have recently reported in the journal Science the development of a device that converts laser beams into heat.
Cao and her colleagues used a silicon wafer and a tunable infrared laser in their experiments. They split the laser beam in two and shine the halves onto opposite sides of the wafer. The front and back faces of the wafer act as mirrors, while the silicon in between plays the role of the medium inside a laser cavity. By tuning the frequency of the laser, along with other properties, the photons become trapped between the wafer's surfaces. As the photons bounce back and forth, the silicon absorbs them until they all disappear and are converted into heat.

Smelling quantum vibrations

One of the most entrenched theories of odour perception holds that the shapes of different molecules provide the cues our brains register as smells. However, it has recently been reported that some fruit flies can distinguish between two molecules with identical shapes, giving the first experimental evidence for the theory that the sense of smell works by detecting molecular vibrations.

Efthimios Skoulakis of the Alexander Fleming Biomedical Sciences Research Center in Vari, Greece, carried out the experiments with fruit flies. The team initially placed the flies in a maze and let them choose between two arms: one contained a fragrant chemical such as acetophenone, a common perfume ingredient, and the other a deuterated version. If the flies were detecting odours by shape alone, they would not be able to tell the two arms apart. The scientists found that the flies preferred ordinary acetophenone.

Brazilian dam project blocked

In the previous episode we reported on the approval of the construction of a controversial dam in the Amazon, the Belo Monte hydroelectric plant, the third largest plant of its kind in the world. The plans have now been suspended by a Brazilian judge over environmental concerns.

Judge Ronaldo Desterro halted the construction plans because the required environmental conditions had not been met; the national development bank has also been barred from financing the project. The construction licence was granted in January.

Science Communication Conference in London

The British Science Association recently announced its annual two-day conference on science communication. The event aims to address some of the main issues facing science communicators in the United Kingdom. It will take place on 25 and 26 May at King's Place, King's Cross, in London. The main theme of the conference will be "online dialogue", exploring innovative uses of online media to establish dialogue between the public and science. Registration for the conference opened on 14 February and closes on 13 May. For more information, visit the association's website.

Read me...

The Large Hadron Collider - Quantum Tunnel Podcast

Large Hadron Collider tunnel and dipole magnets.
Image via Wikipedia

You can download this podcast in iTunes or Feedburner.

The Large Hadron Collider is located 300 feet underneath the French-Swiss border outside Geneva and is the world's biggest and most expensive particle accelerator. It is designed to accelerate  protons to energies of 7 trillion electron volts and then smash them together to recreate the conditions that last prevailed when the universe was less than a trillionth of a second old.

The collider started smashing particles on March 30th, 2010, after 16 years of construction and $10 billion in costs. The new hadron collider will take physics into a realm of energy and time where the current reigning theories simply do not apply, corresponding to an era when cosmologists think that the universe was still differentiating itself, evolving from a primordial blandness and endless potential into the forces and particles that constitute modern reality.

One prime target is a particle called the Higgs boson that is thought to endow other particles with mass, according to the reigning theory of particle physics, known as the Standard Model.

The LHC is part of CERN, which was born amid vineyards and farmland in the countryside outside Geneva in 1954, out of the rubble of postwar Europe. It had the twofold mission of rebuilding European science and getting European countries to work together. Today it has 20 member countries. It was here that the World Wide Web was born in the early 1990s. The lab came into its own scientifically in the early 1980s, when Carlo Rubbia and Simon van der Meer won the Nobel Prize for colliding protons and antiprotons there to produce the particles known as the W and Z bosons, which are responsible for the so-called weak nuclear force that causes some radioactive decays.

Bosons are quanta that, according to the rules of quantum mechanics, transmit forces as they are tossed back and forth in a sort of game of catch between matter particles. The W's and Z's are closely related to photons, which transmit electromagnetic forces, or light.

The innards of the collider are some 1,232 electromagnets, weighing in at 35 tons apiece, strung together like an endless train stretching around the gentle curve of the CERN tunnel. In order to bend 7-trillion-electron-volt protons around such a tight circle, these magnets have to produce magnetic fields of 8.36 tesla, more than 100,000 times the Earth's field, requiring in turn a current of 13,000 amperes through each magnet's coils. To make this possible, the entire ring is bathed in 128 tons of liquid helium, keeping it cooled to 1.9 kelvin, at which temperature the niobium-titanium cables are superconducting and carry the current without resistance.
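These figures hang together: an ultrarelativistic proton in a field B follows a circle of radius r = p/(eB), and with momentum p ≈ E/c the electron charge cancels out of the arithmetic. A few lines of Python confirm a ring-scale bending radius:

```python
# Back-of-the-envelope check: bending radius r = p/(eB) for an
# ultrarelativistic proton, with p ~ E/c so that r = E / (c * B)
# comes out directly in metres when E is in electron volts.
c = 299_792_458.0          # speed of light, m/s
E = 7e12                   # beam energy, eV (7 TeV)
B = 8.36                   # dipole field, tesla

r = E / (c * B)
assert 2.5e3 < r < 3.1e3   # a bending radius of roughly 2.8 km
```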

Running through the core of this train, surrounded by magnets and cold, are two vacuum pipes, one for protons going clockwise, the other counterclockwise. Traveling in tight bunches along the twin beams, the protons will cross each other at four points around the ring, 30 million times a second. During each of these violent crossings, physicists expect that about 20 protons, or the parts thereof - quarks or gluons - will actually collide and spit fire.

Two of the detectors are specialized. One, called Alice, is designed to study a sort of primordial fluid, called a quark-gluon plasma, that is created when the collider smashes together lead nuclei. The other, LHCb, will hunt for subtle differences in matter and antimatter that could help explain how the universe, which was presumably born with equal amounts of both, came to be dominated by matter.

The other two, known as Atlas and the Compact Muon Solenoid, or C.M.S. for short, are the designated rival workhorses of the collider, designed expressly to capture and measure every last spray of particle and spark of energy from the proton collisions.


Key breast cancer driver found

Cancer experts have identified a gene which can cause a particularly aggressive form of breast cancer to develop. The name given to this new oncogene is ZNF703, and it is overactive in one in 12 breast cancers. Scientists working for Cancer Research UK carried out the research and say the gene is a “prime candidate” for the development of new breast cancer drugs. The study was published in the journal EMBO Molecular Medicine.

Physicists reverse the laser

We are all familiar with laser light, so it would seem very odd to think of a laser that sucks in a bright beam rather than emitting one. However, scientists from Yale have recently reported in Science the development of a device that converts laser beams into heat.

Cao and co-workers use a 110-micrometre silicon wafer and a tunable infrared laser in their experiments. They split the laser beam into two and shine the halves onto opposite sides of the wafer. The front and back of the silicon slice act as mirrors, and the silicon in between behaves like the medium inside a laser cavity. By tuning the frequency of the incoming laser beam, as well as other properties, the photons become trapped between the surfaces of the silicon. As they bounce back and forth, the silicon absorbs them until all are sucked up by the device and converted into heat.

Smelling quantum vibrations

It has been widely believed that the different shapes of molecules provide the clues that our brain registers as smells. However, it has recently been reported that some fruit flies can distinguish between two molecules with identical shapes, providing the first experimental evidence to support a controversial theory that the sense of smell can operate by detecting molecular vibrations.

Efthimios Skoulakis of the Alexander Fleming Biomedical Sciences Research Center in Vari, Greece, carried out the experiments on fruit flies. The team initially placed fruit flies in a simple maze that let them choose between two arms, one containing a fragrant chemical such as acetophenone, a common perfume ingredient, the other containing a deuterated version. If the flies were sensing odours using shape alone, they should not be able to tell the difference between the two. In fact, the researchers found that flies preferred ordinary acetophenone.

Brazilian dam project blocked

In the previous episode we reported on the approval of the construction of a controversial dam in the Amazon, the Belo Monte hydroelectric plant, the third largest plant of its kind in the world. The plans have now been suspended by a Brazilian judge over environmental concerns.

Judge Ronaldo Desterro halted the construction plans because environmental requirements had not been met; the national development bank has also been barred from financing the project.

Science Communication Conference in London

The British Science Association has recently announced its annual two-day Science Communication Conference. The event aims to address some of the key issues facing science communicators in the UK. In order to do that, the conference brings together people involved in public engagement with a range of backgrounds including scientists, charities, universities, press offices and policymakers.
The event will take place on 25 and 26 May at King's Place in King's Cross, London. Registration opened on 14 February and will close on 13 May. For more information please visit their website.

Read me...

This has made my Boxing Day - Darwin Ichthys, with feet and all...



Read me...

Winter Solstice Lunar Eclipse

William Castleman shot an amazing time-lapse video of last night's Winter Solstice Lunar Eclipse - the first lunar eclipse to coincide with the winter solstice in 372 years. It was shot from Gainesville, Florida, from 1:10 AM EST (6:10 GMT) to 5:03 AM EST (10:03 GMT).

Read me...

The illustrated guide to a Ph.D.

Imagine a circle that contains all of human knowledge:

By the time you finish elementary school, you know a little:

By the time you finish high school, you know a bit more:

With a bachelor's degree, you gain a specialty:

A master's degree deepens that specialty:

Reading research papers takes you to the edge of human knowledge:

Once you're at the boundary, you focus:

You push at the boundary for a few years:

Until one day, the boundary gives way:

And, that dent you've made is called a Ph.D.:

Of course, the world looks different to you now:

So, don't forget the bigger picture:

Keep pushing.

Reproduced under the Creative Commons License from Matt Might http://matt.might.net/articles/phd-school-in-pictures/

Read me...

The known Universe

If you ever doubt the beauty of the little corner of the Universe in which we live, you just have to take a look at this film. It was developed by the American Museum of Natural History.

It shows us the known Universe as mapped through astronomical observations. Enjoy!

Read me...
