Standing for IoP Council Elections

The Institute of Physics is holding Council Elections and I am proud to announce that I am standing for election as President-Elect. It is an honour to be standing alongside three other candidates, including one of my very own PhD supervisors… 🤣

I am standing for election because I am interested in making physics a subject that is approachable and enjoyable for all. To do that, it is important to be able to speak on behalf of the different sectors that make up our community. I am interested in contributing to the diversity and inclusion agenda of the IoP. I firmly believe that having representatives with varied backgrounds makes our community stronger. 🤛🏼

Programming First Steps: Java or Python?

I am posting this as an extended response to the question that a medic friend of mine asked me the other day. The main question was whether learning Java or Python was best for getting “a basic understanding of coding (medically related)” and “which one will serve best in the future within a remit of medical apps”.

The short answer is that it depends… and my answer is valid not only for medical applications, but in general. I believe that both are very good, popular programming languages. Learning the basics of programming can be done in any programming language of your choice, including both Python and Java. If the aim is to get to grips with what programming is all about, even learning a bit of Scratch or Blockly could be a good start. I also recommend looking at Swift Playgrounds. They will all let you have a look at the basics of programming and will get you started in an easy way.

In this case the query is more nuanced and the answer is still “it depends”. For example, are there any other people around you who are already using a particular programming language for their medical (or any other area of knowledge) application? When doing a PhD, for instance, I recommend you look at what other candidates are using for their work and stick to it. The reason for this is that if you have any questions there are people around you who may be able to support you. You may also end up working with them, and programming in the same language helps. In this case, if there are a number of medics who are already getting their hands dirty with one particular programming language, I’d say go for that. And notice that it does not have to be either Java or Python.

In short, you can’t go wrong with picking either one to start with. Once you start, you may pick up the other with less trepidation.

If you are stuck, I would probably recommend using Python, but then again I may be biased. After all, I have written a couple of books for Pythonistas; OK, OK, I have also written one for MATLAB. That is also an excellent language, although not open source. In that case take a look at Octave… but I digress.

In the rest of the post I will look at some of the similarities and differences between Python and Java in the hope this may help you decide.

Learning curve

The learning curve for Java and Python is very different. I believe that Python is a much easier language to get started with. However, once you have picked up the basics of either, you can contribute to production-level code quite easily. Both languages are object oriented, and depending on your level of knowledge you may be able to read through a program and figure out what it is doing.

The learning curve for anything depends on what you already know, how interested you are in learning the topic, and the learning environment. For example, if you have already done some type of coding or scripting, even if it is pasting some JavaScript into a web page, you may be familiar with the code structure you will run into with a language like Java. Let us look at a Hello World program in Java:

class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello, World!"); 
    }
}

What about Python? Well, take a look:

print("Hello, world!")

Readability is part of the philosophy of writing Python code, and we can see that in the example above. As a result, if you have never programmed before, Python tends to be easier to read.

Syntax

Syntax refers to the rules that we need to follow to write correct “sentences” in the language. Java’s syntax requires a bit more effort than Python’s. Let us take a look. The following program calculates the average of a collection of numbers in Java:

public class Average {
    public static void main(String[] args) {
        int i, total;
        int[] a = {0, 6, 9, 2, 7};
        int n = 5;
        total = 0;

        for (i = 0; i < n; i++) {
            total += a[i];
        }
        System.out.println("Average : " + total / (float) n);
    }
}

  • Curly braces define the blocks of code.
  • Each statement must end in a semicolon (;).
  • Each time you create a new variable, it must have a type. We declare the i and total variables as int, and later we cast n to a float in order to obtain a decimal result.
  • Formatting and spacing are not important. Although the code above looks nice, the program will run even if all of it were written on a single line (don’t do that…).
  • You will also notice how verbose the code is. You will usually end up typing more when writing Java code than you would with Python code.

In Python we can calculate the average with something like this:

def average(a):
    avg = sum(a) / len(a)
    return avg

a = [0, 6, 9, 2, 7]
avg = average(a)
print("Average : {0}".format(avg))

  • Line breaks and indentation define blocks of code in Python. There are no extra symbols like semicolons at the end of a line.
  • Python uses a colon to start classes, methods, and loops. You can see that in the definition of average.
  • Whitespace is important in Python. Developers use it to define blocks of code, so the lines in the code above could not run on one line. You can see this at work in the loop example below.
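
To make the role of the colon and the indentation concrete, here is the same average computed with an explicit loop, mirroring the structure of the Java program above:

a = [0, 6, 9, 2, 7]
total = 0
for x in a:        # the colon opens the loop block
    total += x     # the indented line is the body of the loop
print("Average : {0}".format(total / len(a)))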

Executing code

A big difference between Java and Python is how the two languages execute code. Java is a compiled language: the source code is first translated into bytecode, which the Java Virtual Machine then runs. Python is an interpreted language: the code is executed statement by statement, without a separate compilation step.
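
You can see the interpreted behaviour for yourself in Python’s interactive interpreter, where each statement runs the moment you enter it, with no compile step in sight:

>>> total = 0
>>> for x in [0, 6, 9, 2, 7]:
...     total += x
...
>>> total / 5
4.8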

If you are interested in performance, the distinction above means that Python could be a bit slower than Java, but I think that for the type of programming you may start with this does not matter all that much.

In a nutshell

If you are interested in learning more about programming, either of them should be able to get you started. There may be a number of pros and cons to each language, and I would recommend you ask colleagues what they are using for the type of applications you are interested in. All in all, it does not matter which one you choose. My recommendation is to stick with your choice, and in no time you will pick up the nuances and idiosyncrasies of the language.

Sci-Advent – ‘Electronic amoeba’ finds approximate solution to traveling salesman problem in linear time

Researchers at Hokkaido University and Amoeba Energy in Japan have, inspired by the efficient foraging behavior of a single-celled amoeba, developed an analog computer for finding a reliable and swift solution to the traveling salesman problem — a representative combinatorial optimization problem.

Amoeba-inspired analog electronic computing system integrating resistance crossbar for solving the travelling salesman problem. Scientific Reports, 2020; 10 (1) DOI: 10.1038/s41598-020-77617-7

Many real-world application tasks such as planning and scheduling in logistics and automation are mathematically formulated as combinatorial optimization problems. Conventional digital computers, including supercomputers, are inadequate to solve these complex problems in practically permissible time, as the number of candidate solutions they need to evaluate increases exponentially with the problem size — also known as combinatorial explosion. Thus, new computers called “Ising machines,” including “quantum annealers,” have been actively developed in recent years. These machines, however, require complicated pre-processing to convert each task to the form they can handle and run the risk of presenting illegal solutions that do not meet some constraints and requests, resulting in major obstacles to practical application.
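
To get a feel for the combinatorial explosion, here is a quick back-of-the-envelope count in Python: for n cities there are (n - 1)!/2 distinct closed tours, fixing the starting city and ignoring the direction of travel.

from math import factorial

# Number of distinct closed tours on n cities
# (start city fixed, direction of travel ignored)
for n in (5, 10, 15, 20):
    print(n, factorial(n - 1) // 2)

Already at 20 cities there are over 6 × 10^16 candidate tours, which is why exhaustive search quickly becomes hopeless.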

These obstacles can be avoided using the newly developed “electronic amoeba,” an analog computer inspired by a single-celled amoeboid organism. The amoeba is known to maximize nutrient acquisition efficiently by deforming its body. It has been shown to find an approximate solution to the traveling salesman problem (TSP), i.e., given a map of a certain number of cities, the problem is to find the shortest route for visiting each city exactly once and returning to the starting city. This finding inspired Professor Seiya Kasai at Hokkaido University to mimic the dynamics of the amoeba electronically using an analog circuit, as described in the journal Scientific Reports. “The amoeba core searches for a solution under the electronic environment where resistance values at intersections of crossbars represent constraints and requests of the TSP,” says Kasai. Using the crossbars, the city layout can be easily altered by updating the resistance values without complicated pre-processing.

Kenta Saito, a PhD student in Kasai’s lab, fabricated the circuit on a breadboard and succeeded in finding the shortest route for the 4-city TSP. He evaluated the performance for larger-sized problems using a circuit simulator. The circuit reliably found a high-quality legal solution with a significantly shorter route length than the average length obtained by random sampling. Moreover, the time required to find a high-quality legal solution grew only linearly with the number of cities. Comparing the search time with a representative TSP algorithm, “2-opt,” the electronic amoeba becomes more advantageous as the number of cities increases. “The analog circuit reproduces well the unique and efficient optimization capability of the amoeba, which the organism has acquired through natural selection,” says Kasai.
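
For reference, 2-opt is a classic local-search heuristic: it repeatedly reverses a segment of the tour whenever doing so shortens the route. A minimal sketch in Python (the distance matrix below is made up purely for illustration):

def tour_length(tour, dist):
    # Total length of the closed tour, returning to the start city
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def two_opt(tour, dist):
    # Keep reversing segments while any reversal shortens the tour
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour) + 1):
                candidate = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(candidate, dist) < tour_length(tour, dist):
                    tour, improved = candidate, True
    return tour

dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(two_opt([0, 1, 2, 3], dist))  # [0, 1, 3, 2], length 18

Unlike the analog circuit, this search examines candidate tours one at a time, which is what makes the comparison of scaling behaviour interesting.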

“As the analog computer consists of a simple and compact circuit, it can tackle many real-world problems in which inputs, constraints, and requests dynamically change and can be embedded into IoT devices as a power-saving microchip,” says Masashi Aono who leads Amoeba Energy to promote the practical use of the amoeba-inspired computers.

This is a joint release between Hokkaido University and Amoeba Energy Co., Ltd.

Sci-Advent – New superhighway system discovered in the Solar System

Researchers have discovered a new superhighway network to travel through the Solar System much faster than was previously possible. Such routes can drive comets and asteroids near Jupiter to Neptune’s distance in under a decade and to 100 astronomical units in less than a century. They could be used to send spacecraft to the far reaches of our planetary system relatively fast, and to monitor and understand near-Earth objects that might collide with our planet.

The arches of chaos in the Solar System. Science Advances, 2020; 6 (48): eabd1313 DOI: 10.1126/sciadv.abd1313

In their paper, published in the Nov. 25 issue of Science Advances, the researchers observed the dynamical structure of these routes, forming a connected series of arches inside what’s known as space manifolds that extend from the asteroid belt to Uranus and beyond. This newly discovered “celestial autobahn” or “celestial highway” acts over several decades, as opposed to the hundreds of thousands or millions of years that usually characterize Solar System dynamics.

The most conspicuous arch structures are linked to Jupiter and the strong gravitational forces it exerts. The population of Jupiter-family comets (comets with orbital periods of less than 20 years), as well as the small solar system bodies known as Centaurs, are controlled by such manifolds on unprecedented time scales. Some of these bodies will end up colliding with Jupiter or being ejected from the Solar System.

The structures were resolved by gathering numerical data about millions of orbits in our Solar System and computing how these orbits fit within already-known space manifolds. The results need to be studied further, both to determine how they could be used by spacecraft and to understand how such manifolds behave in the vicinity of the Earth, controlling asteroid and meteorite encounters as well as the growing population of human-made objects in the Earth-Moon system.

Sci-Advent – Trends in prevalence of blindness and distance and near vision impairment over 30 years

Following on from yesterday’s Sci-Advent post about vision and optics, this report from the University of Michigan is relevant news. Researchers say eye care accessibility around the globe isn’t keeping up with an aging population, posing challenges for eye care professionals over the next 30 years.

As the global population grows and ages, so does their need for eye care. But according to two new studies published in The Lancet Global Health, these needs aren’t being met relative to international targets to reduce avoidable vision loss.

As 2020 comes to a close, an international group of researchers set out to provide updated estimates on the number of people that are blind or visually impaired across the globe, to identify the predominant causes, and to illustrate epidemiological trends over the last 30 years.

“This is important because when we think about setting a public health agenda, knowing the prevalence of an impairment, what causes it, and where in the world it’s most common informs the actions that key decision makers like the WHO and ministries of health take to allocate limited resources,” says Joshua Ehrlich, M.D., M.P.H., a study author and ophthalmologist at Kellogg Eye Center.

The study team assesses a collection of secondary data every five years, undertaking a meta-analysis of population-based surveys of eye disease assembled by the Vision Loss Expert Group and spanning from 1980 to 2018.

Creating a blueprint

A study like this poses challenges since regional populations vary in age.

“For example, the population in some Asian and European countries is much older on average than the population in many African nations. Many populations are also growing older over time. A direct comparison of the percentage of the population with blindness or vision impairment wouldn’t paint a complete picture,” explains Ehrlich, who is also a member of the University of Michigan’s Institute for Healthcare Policy and Innovation.

To address this issue, the study looked at age-standardized prevalence, accomplished by adjusting regional populations to fit a standard age structure.

“We found that the age-standardized prevalence is decreasing around the world, which tells us eye care systems and quality of care are getting better,” says study author Monte A. Del Monte, M.D., a pediatric ophthalmologist at Kellogg Eye Center. “However, as populations age, a larger number of people are being affected by serious vision impairment, suggesting we need to improve accessibility to care and further develop human resources to provide care.”

In fact, the researchers found that there wasn’t any significant reduction in the number of people with treatable vision loss in the last ten years, which paled in comparison to the World Health Assembly Global Action Plan target of a 25% global reduction of avoidable vision loss in this same time frame.

Although findings varied by region globally, cataracts and the unmet need for glasses were the most prevalent causes of moderate to severe vision impairment. Approximately 45% of the 33.6 million cases of global blindness were caused by cataracts, which can be treated with surgery.

Refractive error, which causes a blurred image resulting from an abnormal shape of the cornea and lens not bending light correctly, accounted for vision loss in 86 million people across the globe. This largest contributor to moderate or severely impaired vision can be easily treated with glasses.

Also important, vision impairment due to diabetic retinopathy, a complication of diabetes that affects eyesight, was found to have increased in global prevalence.

“This is another condition in which we can prevent vision loss with early screenings and intervention,” says study author Alan L. Robin, M.D., a collaborating ophthalmologist at Kellogg Eye Center and professor at Johns Hopkins Medicine. “As diabetes becomes more common across the globe, this condition may begin to affect younger populations, as well.”

Looking to 2050

“Working as a global eye care community, we need to now look at the next 30 years,” Ehrlich says. “We hope to take these findings and create implementable strategies with our global partners through our Kellogg Eye Center for International Ophthalmology so fewer people go blind unnecessarily.”

In an effort to contribute to the WHO initiative VISION 2020: The Right to Sight, the researchers updated estimates of the global burden of vision loss and provided predictions for what the year 2050 may look like.

They found that the majority of the 43.9 million people blind globally are women. Women also make up the majority of the 295 million people who have moderate to severe vision loss, the 163 million who have mild vision loss and the 510 million who have visual impairments related to the unmet need for glasses, specifically poor near vision.

By 2050, Ehrlich, Del Monte, and Robin predict 61 million people will be blind, 474 million will have moderate and severe vision loss, 360 million will have mild vision loss and 866 million will have visual impairments related to farsightedness.

“Eliminating preventable blindness globally isn’t keeping pace with the global population’s needs,” Ehrlich says. “We face enormous challenges in treating and preventing vision impairment as the global population grows and ages, but I’m optimistic of a future where we will succeed because of the measures we take now to make a difference.”

Both studies were funded by Brien Holden Vision Institute, Fondation Théa, Fred Hollows Foundation, Bill & Melinda Gates Foundation, Lions Clubs International Foundation, Sightsavers International and the University of Heidelberg.

GBD 2019 Blindness and Vision Impairment Collaborators, on behalf of the Vision Loss Expert Group of the Global Burden of Disease Study. Causes of blindness and vision impairment in 2020 and trends over 30 years, and prevalence of avoidable blindness in relation to VISION 2020: the Right to Sight: an analysis for the Global Burden of Disease Study. The Lancet Global Health, 2020; DOI: 10.1016/S2214-109X(20)30489-7

Sci-Advent – Physicists Nail Down the ‘Magic Number’ That Shapes the Universe

This is a reblog of the article in Nautilus by Natalie Wolchover. See the original here.

A team in Paris has made the most precise measurement yet of the fine-structure constant, killing hopes for a new force of nature.

As fundamental constants go, the speed of light, c, enjoys all the fame, yet c’s numerical value says nothing about nature; it differs depending on whether it’s measured in meters per second or miles per hour. The fine-structure constant, by contrast, has no dimensions or units. It’s a pure number that shapes the universe to an astonishing degree — “a magic number that comes to us with no understanding,” as Richard Feynman described it. Paul Dirac considered the origin of the number “the most fundamental unsolved problem of physics.”

Numerically, the fine-structure constant, denoted by the Greek letter α (alpha), comes very close to the ratio 1/137. It commonly appears in formulas governing light and matter. “It’s like in architecture, there’s the golden ratio,” said Eric Cornell, a Nobel Prize-winning physicist at the University of Colorado, Boulder and the National Institute of Standards and Technology. “In the physics of low-energy matter — atoms, molecules, chemistry, biology — there’s always a ratio” of bigger things to smaller things, he said. “Those ratios tend to be powers of the fine-structure constant.”

The constant is everywhere because it characterizes the strength of the electromagnetic force affecting charged particles such as electrons and protons. “In our everyday world, everything is either gravity or electromagnetism. And that’s why alpha is so important,” said Holger Müller, a physicist at the University of California, Berkeley. Because 1/137 is small, electromagnetism is weak; as a consequence, charged particles form airy atoms whose electrons orbit at a distance and easily hop away, enabling chemical bonds. On the other hand, the constant is also just big enough: Physicists have argued that if it were something like 1/138, stars would not be able to create carbon, and life as we know it wouldn’t exist.

Physicists have more or less given up on a century-old obsession over where alpha’s particular value comes from; they now acknowledge that the fundamental constants could be random, decided in cosmic dice rolls during the universe’s birth. But a new goal has taken over.

Physicists want to measure the fine-structure constant as precisely as possible. Because it’s so ubiquitous, measuring it precisely allows them to test their theory of the interrelationships between elementary particles — the majestic set of equations known as the Standard Model of particle physics. Any discrepancy between ultra-precise measurements of related quantities could point to novel particles or effects not accounted for by the standard equations. Cornell calls these kinds of precision measurements a third way of experimentally discovering the fundamental workings of the universe, along with particle colliders and telescopes.

Today, in a new paper in the journal Nature, a team of four physicists led by Saïda Guellati-Khélifa at the Kastler Brossel Laboratory in Paris reported the most precise measurement yet of the fine-structure constant. The team measured the constant’s value to the 11th decimal place, reporting that α = 1/137.03599920611. (The last two digits are uncertain.)
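
As a quick sanity check (nothing like the Paris experiment, of course), the defining combination of constants can be evaluated numerically. The snippet below uses the CODATA values bundled with SciPy, and the result lands on the familiar 1/137.036:

from scipy.constants import e, epsilon_0, hbar, c, pi

# The fine-structure constant: alpha = e^2 / (4*pi*epsilon_0*hbar*c),
# a dimensionless pure number, independent of the units chosen
alpha = e**2 / (4 * pi * epsilon_0 * hbar * c)
print(1 / alpha)  # ~137.036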

With a margin of error of just 81 parts per trillion, the new measurement is nearly three times more precise than the previous best measurement in 2018 by Müller’s group at Berkeley, the main competition. (Guellati-Khélifa made the most precise measurement before Müller’s in 2011.) Müller said of his rival’s new measurement of alpha, “A factor of three is a big deal. Let’s not be shy about calling this a big accomplishment.”

Guellati-Khélifa has been improving her experiment for the past 22 years. She gauges the fine-structure constant by measuring how strongly rubidium atoms recoil when they absorb a photon. (Müller does the same with cesium atoms.) The recoil velocity reveals how heavy rubidium atoms are — the hardest factor to gauge in a simple formula for the fine-structure constant. “It’s always the least accurate measurement that’s the bottleneck, so any improvement in that leads to an improvement in the fine-structure constant,” Müller explained.

The Paris experimenters begin by cooling the rubidium atoms almost to absolute zero, then dropping them in a vacuum chamber. As the cloud of atoms falls, the researchers use laser pulses to put the atoms in a quantum superposition of two states — kicked by a photon and not kicked. The two possible versions of each atom travel on separate trajectories until more laser pulses bring the halves of the superposition back together. The more an atom recoils when kicked by light, the more out of phase it is with the unkicked version of itself. The researchers measure this difference to reveal the atoms’ recoil velocity. “From the recoil velocity, we extract the mass of the atom, and the mass of the atom is directly involved in the determination of the fine-structure constant,” Guellati-Khélifa said.

In such precise experiments, every detail matters. Table 1 of the new paper is an “error budget” listing 16 sources of error and uncertainty that affect the final measurement. These include gravity and the Coriolis force created by Earth’s rotation — both painstakingly quantified and compensated for. Much of the error budget comes from foibles of the laser, which the researchers have spent years perfecting.

For Guellati-Khélifa, the hardest part is knowing when to stop and publish. She and her team stopped the week of February 17, 2020, just as the coronavirus was gaining a foothold in France. Asked whether deciding to publish is like an artist deciding that a painting is finished, Guellati-Khélifa said, “Exactly. Exactly. Exactly.”

Surprisingly, her new measurement differs from Müller’s 2018 result in the seventh digit, a bigger discrepancy than the margin of error of either measurement. This means — barring some fundamental difference between rubidium and cesium — that one or both of the measurements has an unaccounted-for error. The Paris group’s measurement is the more precise, so it takes precedence for now, but both groups will improve their setups and try again.

Though the two measurements differ, they closely match the value of alpha inferred from precise measurements of the electron’s g-factor, a constant related to its magnetic moment, or the torque that the electron experiences in a magnetic field. “You can connect the fine-structure constant to the g-factor with a hell of a lot of math,” said Cornell. “If there are any physical effects missing from the equations, we would be getting the answer wrong.”

Instead, the measurements match beautifully, largely ruling out some proposals for new particles. The agreement between the best g-factor measurements and Müller’s 2018 measurement was hailed as the Standard Model’s greatest triumph. Guellati-Khélifa’s new result is an even better match. “It’s the most precise agreement between theory and experiment,” she said.

And yet she and Müller have both set about making further improvements. The Berkeley team has switched to a new laser with a broader beam (allowing it to strike their cloud of cesium atoms more evenly), while the Paris team plans to replace their vacuum chamber, among other things.

What kind of person puts such a vast effort into such scant improvements? Guellati-Khélifa named three traits: “You have to be rigorous, passionate and honest with yourself.” Müller said in response to the same question, “I think it’s exciting because I love building shiny nice machines. And I love applying them to something important.” He noted that no one can single-handedly build a high-energy collider like Europe’s Large Hadron Collider. But by constructing an ultra-precise instrument rather than a super-energetic one, Müller said, “you can do measurements relevant to fundamental physics, but with three or four people.”

Quantum magic squares

In a new paper in the Journal of Mathematical Physics, Tim Netzer and Tom Drescher from the Department of Mathematics and Gemma De las Cuevas from the Department of Theoretical Physics have introduced the notion of the quantum magic square: a magic square in which, instead of numbers, one puts in matrices.

This is a non-commutative, and thus quantum, generalization of a magic square. The authors show that quantum magic squares cannot be as easily characterized as their “classical” cousins. More precisely, quantum magic squares are not convex combinations of quantum permutation matrices. “They are richer and more complicated to understand,” explains Tom Drescher. “This is the general theme when generalizations to the non-commutative case are studied.” Check out the paper!
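
As a rough illustration, here is a minimal Python sketch of the defining property, assuming the convention used in the paper that every entry is a positive semidefinite matrix and every row and column sums to the identity; the example itself is just a classical permutation matrix with its 0s and 1s replaced by the zero and identity matrices:

import numpy as np

I2, Z2 = np.eye(2), np.zeros((2, 2))

# A 3x3 quantum magic square built from a classical permutation matrix:
# each entry is a (trivially) positive semidefinite 2x2 matrix
Q = [[I2, Z2, Z2],
     [Z2, Z2, I2],
     [Z2, I2, Z2]]

def is_quantum_magic(Q):
    # Check that every row and every column sums to the identity
    n = len(Q)
    target = np.eye(Q[0][0].shape[0])
    rows = all(np.allclose(sum(Q[i][j] for j in range(n)), target) for i in range(n))
    cols = all(np.allclose(sum(Q[i][j] for i in range(n)), target) for j in range(n))
    return rows and cols

print(is_quantum_magic(Q))  # True

The interesting objects in the paper are, of course, the ones that are not of this classical form.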

Quantum magic squares: Dilations and their limitations. Journal of Mathematical Physics, 2020; 61 (11). DOI: 10.1063/5.0022344

2020 Nobel Prize in Physics – Black holes

I had intended to post this much earlier, and certainly closer to the actual announcement of the Nobel Prizes in early October. It has, however, been a very busy period. Better late than never, right?

I was very pleased to see that the winners of the 2020 Nobel Prize in Physics were a group that combined the observational with the theoretical. Sir Roger Penrose, Reinhard Genzel, and Andrea Ghez are the recipients of the 2020 Nobel Prize in Physics. Penrose receives half of the 10 million Swedish krona prize, while Ghez and Genzel share the other half.

Penrose’s work has taken the concept of black holes from the realm of speculation to a sound theoretical idea underpinning modern astrophysics. With the use of topology and general relativity, Penrose has provided us with an explanation of how the collapse of matter due to gravity leads to the singularity at the centre of a black hole.

A few decades after Penrose’s work in the 1960s, Genzel and Ghez independently used adaptive optics and speckle imaging to analyse the motion of stars tightly orbiting Sagittarius A*. Their work led to the conclusion that the only explanation for the radio source at the centre of the Milky Way was a black hole.

Ghez is the fourth woman to be named a Nobel physics laureate, after Donna Strickland (2018), Maria Goeppert Mayer (1963), and Marie Curie (1903).

From an Oddity to an Observation

In 1916 Karl Schwarzschild described a solution to Einstein’s field equations for the curved spacetime around a spherically symmetric mass. Some terms in the solution either diverged or vanished at r=\frac{2GM}{c^2} or at r=0. A couple of decades later, Oppenheimer and his student Hartland Snyder realised that the former value corresponded to the radius within which light, under the influence of gravity, would no longer be able to reach outside observers – the so-called event horizon. Their work would need more than mathematical assumptions to be accepted.
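
To put a number on that critical radius, here is a quick evaluation of r = 2GM/c² for one solar mass; the constants are standard SI values, and the choice of a solar mass is purely for illustration:

# Schwarzschild radius r_s = 2GM/c^2 for one solar mass, in SI units
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_sun = 1.989e30    # mass of the Sun, kg

r_s = 2 * G * M_sun / c**2
print(r_s)          # ~2950 m: compress the Sun below ~3 km and light cannot escape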

By 1964 Penrose had come up with a topological picture of the gravitational collapse described above, crucially doing so without the assumptions made by Oppenheimer and Snyder. His work instead required the idea of a trapped surface: a 2D surface in which all light orthogonal to it converges. Penrose’s work showed that inside the event horizon the radial direction becomes time-like. It is impossible to reverse out of the black hole, and the implication is that all matter ends up at the singularity. Penrose’s research established black holes as a plausible explanation for objects such as quasars and other active galactic nuclei.

Closer to Home

Although our own galaxy is by no means spewing energy like your average quasar, it still emits X-rays and other radio signals. Could it be that there is a black hole-like object at the heart of the Milky Way? This was a question that Genzel and Ghez would come to answer in time.

With the use of infrared (IR) spectroscopy, studies of gas clouds near the galactic centre showed rising velocities with decreasing distances to the centre, suggesting the presence of a massive, compact source of gravitation. These studies in the 1980s were not definitive but provided a tantalising possibility.

In the mid 1990s, both Genzel and Ghez set out to obtain better evidence with the help of large telescopes operating in the near-IR to detect photons escaping the galactic centre. Genzel and colleagues observed from Chile, whereas Ghez and her team observed from Hawaii.

Their independent development of speckle imaging, a technique that corrects for the distortions caused by Earth’s atmosphere, enabled them to make the crucial observations. The technique improves the images by stacking a series of exposures, bringing the smeared light of individual stars into alignment. In 1997, both groups published measurements of the stars’ movements, strongly favouring the black hole explanation.

Further to that work, the use of adaptive optics by both laureates not only improved the resolutions obtained, but also provided the possibility of carrying out spectroscopic analyses which enabled them to get velocities in 3D and therefore obtain precise orbits.

The “star” object in this saga is the so-called S0-2 (Ghez’s group) or S2 (Genzel’s group) star. It approaches within about 17 light-hours of Sagittarius A* every 16 years in a highly elliptical orbit.
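
Those orbital parameters are what pin down the mass. As a rough cross-check, Kepler’s third law in convenient units (semi-major axis in astronomical units, period in years) gives the enclosed mass in solar masses. The 16-year period is quoted above; the semi-major axis of roughly 1,000 au is an assumed, representative figure for S2, not a number from this post:

a_au = 1000.0   # semi-major axis of S2's orbit in au (assumed for illustration)
T_yr = 16.0     # orbital period in years (quoted above)

# Kepler's third law: M (in solar masses) = a^3 / T^2 with a in au, T in years
M_solar = a_au**3 / T_yr**2
print(f"{M_solar:.1e}")  # ~3.9e6 solar masses

A few million solar masses packed into a region that stars skim within light-hours is exactly why a black hole is the only viable explanation.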

Congratulations to Ghez and Genzel, and Penrose.