Sci-Advent – Artificial Intelligence, High Performance Computing and Gravitational Waves

In a recent paper published on the arXiv, researchers have highlighted the advantages that artificial intelligence techniques bring to research in fields such as astrophysics. They are making their models openly available, and that is always a great thing to see. They mention the use of these techniques to detect binary neutron stars and to forecast mergers of multi-messenger sources such as binary neutron stars and neutron star-black hole systems. Here are some highlights from the paper:

Finding new ways to use artificial intelligence (AI) to accelerate the analysis of gravitational wave data, and ensuring the developed models are easily reusable promises to unlock new opportunities in multi-messenger astrophysics (MMA), and to enable wider use, rigorous validation, and sharing of developed models by the community. In this work, we demonstrate how connecting recently deployed DOE and NSF-sponsored cyberinfrastructure allows for new ways to publish models, and to subsequently deploy these models into applications using computing platforms ranging from laptops to high performance computing clusters. We develop a workflow that connects the Data and Learning Hub for Science (DLHub), a repository for publishing machine learning models, with the Hardware Accelerated Learning (HAL) deep learning computing cluster, using funcX as a universal distributed computing service. We then use this workflow to search for binary black hole gravitational wave signals in open source advanced LIGO data. We find that using this workflow, an ensemble of four openly available deep learning models can be run on HAL and process the entire month of August 2017 of advanced LIGO data in just seven minutes, identifying all four binary black hole mergers previously identified in this dataset, and reporting no misclassifications. This approach, which combines advances in AI, distributed computing, and scientific data infrastructure opens new pathways to conduct reproducible, accelerated, data-driven gravitational wave detection.

Research and development of AI models for gravitational wave astrophysics is evolving at a rapid pace. In less than four years, this area of research has evolved from disruptive prototypes into sophisticated AI algorithms that describe the same 4-D signal manifold as traditional gravitational wave detection pipelines for binary black hole mergers, namely, quasi-circular, spinning, non-precessing, binary systems; have the same sensitivity as template matching algorithms; and are orders of magnitude faster, at a fraction of the computational cost.

AI models have been proven to effectively identify real gravitational wave signals in advanced LIGO data, including binary black hole and binary neutron star mergers. The current pace of progress makes it clear that the broader community will continue to advance the development of AI tools to realize the science goals of Multi-Messenger Astrophysics.

Furthermore, mirroring the successful approach of corporations leading AI innovation in industry and technology, we are releasing our AI models to enable the broader community to use and perfect them. This approach is also helpful to address healthy and constructive skepticism from members of the community who do not feel at ease using AI algorithms.
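
To make the highlighted workflow more concrete, here is a minimal sketch, not the authors' code, of how an ensemble of deep learning classifiers could scan a stretch of LIGO strain data in sliding windows and flag candidate events only when all models agree. The window length, threshold and the tiny stand-in models are assumptions for illustration; in the published workflow the four pre-trained models would be fetched from DLHub instead.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

SAMPLE_RATE = 4096          # Hz, the usual advanced LIGO open-data rate
WINDOW = SAMPLE_RATE        # 1-second analysis windows (an assumption)
STEP = SAMPLE_RATE // 2     # 50% overlap between consecutive windows

def make_model():
    """Tiny stand-in classifier; in the real workflow the four published,
    pre-trained models would be downloaded (e.g. from DLHub) instead."""
    return tf.keras.Sequential([
        layers.Conv1D(8, 16, activation="relu", input_shape=(WINDOW, 1)),
        layers.GlobalMaxPooling1D(),
        layers.Dense(1, activation="sigmoid"),   # probability a signal is present
    ])

ensemble = [make_model() for _ in range(4)]

def scan(strain, threshold=0.5):
    """Return start times (s) of windows that every model flags as a candidate."""
    candidates = []
    for start in range(0, len(strain) - WINDOW + 1, STEP):
        segment = strain[start:start + WINDOW]
        # Simple normalisation; real pipelines whiten the data properly.
        segment = (segment - segment.mean()) / (segment.std() + 1e-12)
        x = segment.reshape(1, WINDOW, 1)
        scores = [float(m.predict(x, verbose=0)[0, 0]) for m in ensemble]
        if min(scores) > threshold:              # demand agreement across the ensemble
            candidates.append(start / SAMPLE_RATE)
    return candidates

# Toy usage: 30 seconds of simulated noise (real data would come from GWOSC).
print(scan(np.random.default_rng(0).normal(size=30 * SAMPLE_RATE)))
```

In the published workflow this inference step is packaged as DLHub models and dispatched to the HAL cluster through funcX rather than run locally, which is what allowed an entire month of advanced LIGO data to be processed in about seven minutes.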

Sci-Advent – Artificial intelligence improves control of powerful plasma accelerators

This is a reblog of the post by Hayley Dunning in the Imperial College website. See the original here.

Researchers have used AI to control beams for the next generation of smaller, cheaper accelerators for research, medical and industrial applications.

Electrons are ejected from the plasma accelerator at almost the speed of light, before being passed through a magnetic field which separates the particles by their energy. They are then fired at a fluorescent screen, shown here

Experiments led by Imperial College London researchers, using the Science and Technology Facilities Council’s Central Laser Facility (CLF), showed that an algorithm was able to tune the complex parameters involved in controlling the next generation of plasma-based particle accelerators.

“The techniques we have developed will be instrumental in getting the most out of a new generation of advanced plasma accelerator facilities under construction within the UK and worldwide.” (Dr Rob Shalloo)

The algorithm was able to optimize the accelerator much more quickly than a human operator, and could even outperform experiments on similar laser systems.

These accelerators focus the energy of the world’s most powerful lasers down to a spot the size of a skin cell, producing electrons and x-rays with equipment a fraction of the size of conventional accelerators.

The electrons and x-rays can be used for scientific research, such as probing the atomic structure of materials; in industrial applications, such as for producing consumer electronics and vulcanised rubber for car tyres; and could also be used in medical applications, such as cancer treatments and medical imaging.

Broadening accessibility

Several facilities using these new accelerators are in various stages of planning and construction around the world, including the CLF’s Extreme Photonics Applications Centre (EPAC) in the UK, and the new discovery could help them work at their best in the future. The results are published today in Nature Communications.

First author Dr Rob Shalloo, who completed the work at Imperial and is now at the accelerator centre DESY, said: “The techniques we have developed will be instrumental in getting the most out of a new generation of advanced plasma accelerator facilities under construction within the UK and worldwide.

“Plasma accelerator technology provides uniquely short bursts of electrons and x-rays, which are already finding uses in many areas of scientific study. With our developments, we hope to broaden accessibility to these compact accelerators, allowing scientists in other disciplines and those wishing to use these machines for applications, to benefit from the technology without being an expert in plasma accelerators.”

The outside of the vacuum chamber

First of its kind

The team worked with laser wakefield accelerators. These combine the world’s most powerful lasers with a source of plasma (ionised gas) to create concentrated beams of electrons and x-rays. Traditional accelerators need hundreds of metres to kilometres to accelerate electrons, but wakefield accelerators can manage the same acceleration within the space of millimetres, drastically reducing the size and cost of the equipment.

However, because wakefield accelerators operate in the extreme conditions created when lasers are combined with plasma, they can be difficult to control and optimise to get the best performance. In wakefield acceleration, an ultrashort laser pulse is driven into plasma, creating a wave that is used to accelerate electrons. Both the laser and plasma have several parameters that can be tweaked to control the interaction, such as the shape and intensity of the laser pulse, or the density and length of the plasma.

While a human operator can tweak these parameters, it is difficult to know how to optimise so many parameters at once. Instead, the team turned to artificial intelligence, creating a machine learning algorithm to optimise the performance of the accelerator.

The algorithm adjusted up to six parameters controlling the laser and plasma, fired the laser, analysed the data, and reset the parameters, repeating this loop many times in succession until the optimal parameter configuration was reached.
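
As an illustration of this closed measure-and-update loop, the sketch below uses Bayesian optimisation via the open-source scikit-optimize library. The six parameter names, their bounds and the synthetic "shot" response are placeholders, not the settings or software used at the CLF.

```python
import numpy as np
from skopt import gp_minimize
from skopt.space import Real

# Six tunable laser/plasma parameters (names and bounds are illustrative assumptions).
space = [
    Real(10.0, 50.0, name="gas_pressure_mbar"),
    Real(-2.0, 2.0, name="focus_position_mm"),
    Real(-500.0, 500.0, name="grating_separation_um"),
    Real(-1.0, 1.0, name="second_order_phase"),
    Real(-1.0, 1.0, name="third_order_phase"),
    Real(1.0, 10.0, name="plasma_length_mm"),
]

def fire_and_measure(params):
    """Stand-in for one experimental shot: set the hardware, fire the laser and
    read the diagnostics. Here a smooth synthetic response plus noise is used so
    that the sketch runs without any hardware attached."""
    target = np.array([30.0, 0.0, 0.0, 0.2, -0.1, 5.0])
    scale = np.array([20.0, 2.0, 500.0, 1.0, 1.0, 5.0])
    misfit = np.sum(((np.array(params) - target) / scale) ** 2)
    return np.exp(-misfit) + 0.01 * np.random.randn()   # noisy "beam quality" score

def objective(params):
    # gp_minimize minimises, so return the negative of the quantity to maximise.
    return -fire_and_measure(params)

result = gp_minimize(objective, space, n_calls=60, random_state=0)
print("Best settings found:", result.x)
print("Best objective value:", result.fun)
```

A Gaussian-process surrogate model of the shot-to-shot response is what lets the optimiser choose informative settings after relatively few shots, which is how this kind of loop can out-pace manual tuning.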

Lead researcher Dr Matthew Streeter, who completed the work at Imperial and is now at Queen’s University Belfast, said: “Our work resulted in an autonomous plasma accelerator, the first of its kind. As well as allowing us to efficiently optimise the accelerator, it also simplifies their operation and allows us to spend more of our efforts on exploring the fundamental physics behind these extreme machines.”

Future designs and further improvements

The team demonstrated their technique using the Gemini laser system at the CLF, and have already begun to use it in further experiments to probe the atomic structure of materials in extreme conditions and in studying antimatter and quantum physics.

The data gathered during the optimisation process also provided new insight into the dynamics of the laser-plasma interaction inside the accelerator, potentially informing future designs to further improve accelerator performance.

The experiment was led by Imperial College London researchers with a team of collaborators from the Science and Technology Facilities Council (STFC), the York Plasma Institute, the University of Michigan, the University of Oxford and the Deutsches Elektronen-Synchrotron (DESY). It was funded by the UK’s STFC, the EU Horizon 2020 research and innovation programme, the US National Science Foundation and the UK’s Engineering and Physical Sciences Research Council.

‘Automation and control of laser wakefield accelerators using Bayesian optimisation’ by R.J. Shalloo et al. is published in Nature Communications.

SciAdvent – Machine Learning in Ear, Nose and Throat

This is a reblog of the article by Cian Hughes and Sumit Agrawal in ENTNews. See the original here.

Figure 1. (Left) CT scan of the right temporal bone. (Middle) Structures of the temporal bone automatically segmented using a TensorFlow based deep learning algorithm. (Right) Three-dimensional model of the critical structures of the temporal bone to be used for surgical planning and simulation. 
Images courtesy of the Auditory Biophysics Laboratory, Western University, London, Canada.

Machine learning in healthcare

Over the last five years there have been significant advances in high performance computing that have led to enormous scientific breakthroughs in the field of machine learning (a form of artificial intelligence), especially with regard to image processing and data analysis. 

These breakthroughs now affect multiple aspects of our lives, from the way our phone sorts and recognises photographs, to automated translation and transcription services, and have the potential to revolutionise the practice of medicine.

The most promising form of artificial intelligence used in medical applications today is deep learning. Deep learning is a type of machine learning in which deep neural networks are trained to identify patterns in data [1]. A common form of neural network used in image processing is a convolutional neural network (CNN). Initially developed for general-purpose visual recognition, it has shown considerable promise in, for instance, the detection and classification of disease on medical imaging.
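
As a concrete illustration of the kind of network being described, here is a minimal image classifier in TensorFlow/Keras. The input size and the two-class output are illustrative assumptions, not taken from any of the cited studies.

```python
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    # Convolution layers learn local image features; pooling builds larger context.
    layers.Conv2D(16, 3, activation="relu", input_shape=(128, 128, 1)),  # e.g. one CT slice
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(2, activation="softmax"),   # e.g. disease present / absent
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_images, train_labels, epochs=10)  # training on labelled scans (not shown)
```

Segmentation networks such as those used in the temporal bone studies replace the final dense layers with convolutional decoders that output a label per voxel, but the convolution-and-pooling feature extraction shown here is the common core.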

“Machine learning algorithms have also been central to the development of multiple assistive technologies that can help patients to overcome or alleviate disabilities”

Automated image segmentation has numerous clinical applications, ranging from quantitative measurement of tissue volume to surgical planning and guidance, medical education and even cancer treatment planning. It is hoped that such advances in automated data analysis will help deliver more timely care and alleviate workforce shortages in areas such as breast cancer screening [2], where patient demand for screening already outstrips the availability of specialist breast radiologists in many parts of the world.

Applications in otolaryngology

Artificial intelligence is quickly making its way into [our] specialty. Both otolaryngologists and audiologists will soon be incorporating this technology into their clinical practices. Machine learning has been used to automatically classify auditory brainstem responses [8] and estimate audiometric thresholds [9]. This has allowed for accurate online testing [10], which could be used for rural and remote areas without access to standard audiometry (see the article by Dr Matthew Bromwich here).

Machine learning algorithms have also been central to the development of multiple assistive technologies that can help patients to overcome or alleviate disabilities. For example, in the context of hearing loss, significant advances in automated transcription apps, driven by machine learning algorithms, have proven particularly useful in recent months for patients who find themselves unable to lipread due to the use of face coverings to prevent the spread of COVID-19.

Figure 2. The virtual reality simulator CardinalSim (https://cardinalsim.stanford.edu/) depicting 
a left mastoidectomy and facial recess approach. The facial nerve (yellow) and round window 
(blue) were automatically delineated using deep learning techniques. 
Image courtesy of the Auditory Biophysics Laboratory, Western University, London, Canada

In addition to their role in general image classification, CNNs are likely to play a significant role in the introduction of machine learning in healthcare, especially in image-heavy specialties such as otolaryngology. For otologists, deep learning algorithms can already identify detailed temporal bone structures from CT images [3-6], segment intracochlear anatomy [7], and identify individual cochlear implant electrodes [8] (Figure 1); automatic analysis of critical structures on temporal bone scans has already facilitated patient-specific virtual reality otologic surgery [9] (Figure 2). Deep learning will likely also be critical in customised cochlear implant programming in the future.

“Automatic analysis of critical structures on temporal bone scans has already facilitated patient-specific virtual reality otologic surgery”

Convolutional neural networks have also been used in rhinology to automatically delineate critical anatomy and quantify sinus opacification [10-12]. Deep learning networks have been used in head and neck oncology to automatically segment anatomic structures to accelerate radiotherapy planning [13-18]. For laryngologists, voice analysis software will likely incorporate machine learning classifiers to identify pathology as it has been shown to perform better than traditional rule-based algorithms [19].

Figure 3. Automated segmentation of organs at risk of damage from radiation during radiotherapy 
for head and neck cancer. Five axial slices from the scan of a 58-year-old male patient with a cancer 
of the right tonsil selected from the Head-Neck Cetuximab trial dataset (patient 0522c0416) [20,21]. 
Adapted with permission from the original authors [13].

Conclusion

In summary, artificial intelligence and, in particular, deep learning algorithms will radically change the way we manage patients within our careers. Although developed in high-resource settings, the technology has equally significant applications in low-resource settings to facilitate quality care even in the presence of limited human resources.

“Although developed in high-resource settings, the technology has equally significant applications in low-resource settings to facilitate quality care even in the presence of limited human resources”

References

1. Bengio Y, Courville A, Vincent P. Representation learning: a review and new perspectives. IEEE Trans Pattern Anal Mach Intell 2013;35:1798-828. 
2. McKinney SM, Sieniek M, Shetty S. International evaluation of an AI system for breast cancer screening. Nature 2020;577:89-94. 
3. Heutink F, Kock V, Verbist B, et al. Multi-Scale deep learning framework for cochlea localization, segmentation and analysis on clinical ultra-high-resolution CT images. Comput Methods Programs Biomed 2020;191:105387. 
4. Fauser J, Stenin I, Bauer M, et al. Toward an automatic preoperative pipeline for image-guided temporal bone surgery. Int J Comput Assist Radiol Surg 2019;14(6):967-76. 
5. Zhang D, Wang J, Noble JH, et al. Deep convolutional neural networks for accurate classification and multi-landmark localization of head CTs. Med Image Anal 2020;61:101659.
6. Nikan S, van Osch K, Bartling M, et al. PWD-3DNet: A deep learning-based fully-automated segmentation of multiple structures on temporal bone CT scans. Submitted to IEEE Trans Image Process.
7. Wang J, Noble JH, Dawant BM. Metal Artifact Reduction and Intra Cochlear Anatomy Segmentation Inct Images of the Ear With A Multi-Resolution Multi-Task 3D Network. IEEE 17th International Symposium on Biomedical Imaging (ISBI) 2020;596-9. 
8. Chi Y, Wang J, Zhao Y, et al. A Deep-Learning-Based Method for the Localization of Cochlear Implant Electrodes in CT Images. IEEE 16th International Symposium on Biomedical Imaging (ISBI) 2019;1141-5. 
9. Compton EC, et al. Assessment of a virtual reality temporal bone surgical simulator: a national face and content validity study. J Otolaryngol Head Neck Surg 2020;49:17. 
10. Laura CO, Hofmann P, Drechsler K, Wesarg S. Automatic Detection of the Nasal Cavities and Paranasal Sinuses Using Deep Neural Networks. IEEE 16th International Symposium on Biomedical Imaging (ISBI) 2019;1154-7. 
11. Iwamoto Y, Xiong K, Kitamura T, et al. Automatic Segmentation of the Paranasal Sinus from Computer Tomography Images Using a Probabilistic Atlas and a Fully Convolutional Network. Conf Proc IEEE Eng Med Biol Soc 2019;2789-92. 
12. Humphries SM, Centeno JP, Notary AM, et al. Volumetric assessment of paranasal sinus opacification on computed tomography can be automated using a convolutional neural network. Int Forum Allergy Rhinol 2020. 
13. Nikolov S, Blackwell S, Mendes R, et al. Deep learning to achieve clinically applicable segmentation of head and neck anatomy for radiotherapy. arXiv [cs.CV] 2018. 
14. Tong N, Gou S, Yang, S, et al. Fully automatic multi-organ segmentation for head and neck cancer radiotherapy using shape representation model constrained fully convolutional neural networks. Med Phys 2018;45;4558-67. 
15. Ibragimov B, Xing L. Segmentation of organs-at-risks in head and neck CT images using convolutional neural networks. Med Phys 2017;44:547-57. 
16. Vrtovec T, Močnik D, Strojan P, et al. Auto-segmentation of organs at risk for head and neck radiotherapy planning: from atlas-based to deep learning methods. Med Phys 2020.
17. Zhu W, Huang Y, Zeng L. et al. AnatomyNet: Deep learning for fast and fully automated whole-volume segmentation of head and neck anatomy. Med Phys 2019;46(2):576-89. 
18. Tong N, Gou S, Yang S, et al. Shape constrained fully convolutional DenseNet with adversarial training for multiorgan segmentation on head and neck CT and low-field MR images. Med Phys 2019;46:2669-82. 
19. Cesari U, De Pietro G, Marciano E, et al. Voice Disorder Detection via an m-Health System: Design and Results of a Clinical Study to Evaluate Vox4Health. Biomed Res Int 2018;8193694. 
20. Bosch WR, Straube WL, Matthews JW, Purdy JA. Data From Head-Neck_Cetuximab 2015. 
21. Clark K, Vendt B, Smith K, et al. The Cancer Imaging Archive (TCIA): maintaining and operating a public information repository. J Digit Imaging 2013;26:1045-57.

Sci-Advent – Artificial intelligence helps scientists develop new general models in ecology

The automation of scientific discovery is here to stay. Among other results, a machine-human collaboration has found a hitherto unknown general model explaining the relationship between the area and age of an island and the number of species it hosts.

In ecology, millions of species interact with one another and with their environment in billions of different ways. Ecosystems often seem chaotic, or at least overwhelming for someone trying to understand them and make predictions for the future.

Artificial intelligence and machine learning are able to detect patterns and predict outcomes in ways that often resemble human reasoning. They pave the way to increasingly powerful cooperation between humans and computers.

Within AI, evolutionary computation methods replicate in some sense the processes of evolution of species in the natural world. A particular method called symbolic regression allows the evolution of human-interpretable formulas that explain natural laws.

“We used symbolic regression to demonstrate that computers are able to derive formulas that represent the way ecosystems or species behave in space and time. These formulas are also easy to understand. They pave the way for general rules in ecology, something that most methods in AI cannot do,” says Pedro Cardoso, curator at the Finnish Museum of Natural History, University of Helsinki.

With the help of the symbolic regression method, an interdisciplinary team from Finland, Portugal, and France was able to explain why some species exist in some regions and not in others, and why some regions have more species than others.

The researchers were able, for example, to find a new general model that explains why some islands have more species than others. Oceanic islands have a natural life cycle, emerging from volcanoes and eventually submerging through erosion after millions of years. With no human input, the algorithm found that the number of species on an island increases with island age and peaks at intermediate ages, when erosion is still low.
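
For readers curious what symbolic regression looks like in practice, the sketch below uses the open-source gplearn library on synthetic island data that merely mimics the qualitative pattern described above. It is not the study's code or data, and the variable names and parameter choices are assumptions.

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)
area = rng.uniform(1, 1000, 300)    # island area (arbitrary units)
age = rng.uniform(0.1, 10, 300)     # island age (millions of years)
# Synthetic richness: grows with area, rises then falls with age, plus noise.
species = (area ** 0.3) * age * np.exp(-0.4 * age) + rng.normal(0, 0.5, 300)

X = np.column_stack([area, age])
est = SymbolicRegressor(population_size=2000, generations=20,
                        function_set=("add", "sub", "mul", "div", "log", "sqrt"),
                        parsimony_coefficient=0.01, random_state=0)
est.fit(X, species)
print(est._program)   # best evolved formula in X0 (area) and X1 (age)
```

The printed expression is an explicit formula in area and age that can be inspected, simplified and compared against existing species-area-age models, which is the human-interpretable property the researchers emphasise.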

“The explanation was known, a couple of formulas already existed, but we were able to find new ones that outperform the existing ones under certain circumstances,” says Vasco Branco, PhD student working on the automation of extinction risk assessments at the University of Helsinki.

The research proposes that explainable artificial intelligence is a field to explore and promotes the cooperation between humans and machines in ways that are only now starting to scratch the surface.

“Evolving free-form equations purely from data, often without prior human inference or hypotheses, may represent a very powerful tool in the arsenal of a discipline as complex as ecology,” says Luis Correia, computer science professor at the University of Lisbon.

Automated Discovery of Relationships, Models, and Principles in Ecology. Frontiers in Ecology and Evolution, 2020; 8. DOI: 10.3389/fevo.2020.530135

Sci-Advent – Significant step toward quantum advantage

Optimised quantum algorithms present solution to Fermi-Hubbard model on near-term hardware

This a reblog of an article in Science Daily. See the original here.

The team, led by Bristol researcher and Phasecraft co-founder, Dr. Ashley Montanaro, has discovered algorithms and analysis which significantly lessen the quantum hardware capability needed to solve problems which go beyond the realm of classical computing, even supercomputers.

In the paper, published in Physical Review B, the team demonstrates how optimised quantum algorithms can solve the notorious Fermi-Hubbard model on near-term hardware.

The Fermi-Hubbard model is of fundamental importance in condensed-matter physics as a model for strongly correlated materials and a route to understanding high-temperature superconductivity.
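
For reference, the single-band Fermi-Hubbard Hamiltonian is conventionally written (in standard textbook notation, not quoted from the paper) as:

\[
H = -t \sum_{\langle i,j \rangle, \sigma} \left( c^{\dagger}_{i\sigma} c_{j\sigma} + c^{\dagger}_{j\sigma} c_{i\sigma} \right) + U \sum_{i} n_{i\uparrow} n_{i\downarrow}
\]

Here t is the hopping amplitude between neighbouring lattice sites and U is the on-site repulsion between opposite-spin fermions; the competition between these two terms is what makes the ground state so hard to compute classically.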

Finding the ground state of the Fermi-Hubbard model has been predicted to be one of the first applications of near-term quantum computers, and one that offers a pathway to understanding and developing novel materials.

Dr. Ashley Montanaro, research lead and cofounder of Phasecraft: “Quantum computing has critically important applications in materials science and other domains. Despite the major quantum hardware advances recently, we may still be several years from having the right software and hardware to solve meaningful problems with quantum computing. Our research focuses on algorithms and software optimisations to maximise the quantum hardware’s capacity, and bring quantum computing closer to reality.

“Near-term quantum hardware will have limited device and computation size. Phasecraft applied new theoretical ideas and numerical experiments to put together a very comprehensive study on different strategies for solving the Fermi-Hubbard model, zeroing in on strategies that are most likely to have the best results and impact in the near future.

“The results suggest that optimising over quantum circuits with a gate depth substantially less than a thousand could be sufficient to solve instances of the Fermi-Hubbard model beyond the capacity of a supercomputer. This new research shows significant promise for the capabilities of near-term quantum devices, improving on previous research findings by around a factor of 10.”

Physical Review B, published by the American Physical Society, is the top specialist journal in condensed-matter physics. The peer-reviewed research paper was also chosen as the Editors’ Suggestion and to appear in Physics magazine.

Andrew Childs, Professor in the Department of Computer Science and Institute for Advanced Computer Studies at the University of Maryland: “The Fermi-Hubbard model is a major challenge in condensed-matter physics, and the Phasecraft team has made impressive steps in showing how quantum computers could solve it. Their work suggests that surprisingly low-depth circuits could provide useful information about this model, making it more accessible to realistic quantum hardware.”

Hartmut Neven, Head of Quantum Artificial Intelligence Lab, Google: “Sooner or later, quantum computing is coming. Developing the algorithms and technology to power the first commercial applications of early quantum computing hardware is the toughest challenge facing the field, which few are willing to take on. We are proud to be partners with Phasecraft, a team that are developing advances in quantum software that could shorten that timeframe by years.”

Phasecraft Founder Dr. Toby Cubitt: “At Phasecraft, our team of leading quantum theorists have been researching and applying quantum theory for decades, leading some of the top global academic teams and research in the field. Today, Ashley and his team have demonstrated ways to get closer to achieving new possibilities that exist just beyond today’s technological bounds.”

Phasecraft has closed a record seed round for a quantum company in the UK with £3.7m in funding from private-sector VC investors, led by LocalGlobe with Episode1 along with previous investors. Former Songkick founder Ian Hogarth has also joined as board chair for Phasecraft. Phasecraft previously raised a £750,000 pre-seed round led by UCL Technology Fund with Parkwalk Advisors and London Co-investment Fund and has earned several grants facilitated by InnovateUK. Between equity funding and research grants, Phasecraft has raised more than £5.5m.

Dr Toby Cubitt: “With new funding and support, we are able to continue our pioneering research and industry collaborations to develop the quantum computing industry and find useful applications faster.”

Sci-Advent – Aztec skull tower: Archaeologists unearth new sections in Mexico City

This is a reblog of an article from the BBC. See the original here.

Archaeologists have excavated more sections of an extraordinary Aztec tower of human skulls under the centre of Mexico City.

Mexico’s National Institute of Anthropology and History (INAH) said a further 119 skulls had been uncovered. The tower was discovered in 2015 during the restoration of a building in the Mexican capital.

It is believed to be part of a skull rack from the temple to the Aztec god of the sun, war and human sacrifice. Known as the Huey Tzompantli, the skull rack stood on the corner of the chapel of Huitzilopochtli, the patron of the Aztec capital Tenochtitlan.

The Aztecs were a group of Nahuatl-speaking peoples that dominated large parts of central Mexico from the 14th to the 16th centuries. Their empire was overthrown by invaders led by the Spanish conquistador Hernán Cortés, who captured Tenochtitlan in 1521.

A similar structure to the Huey Tzompantli struck fear in the soldiers accompanying the Spanish conqueror when they invaded the city. The cylindrical structure is near the huge Metropolitan Cathedral built over the Templo Mayor, one of the main temples of Tenochtitlan, now modern day Mexico City.

“The Templo Mayor continues to surprise us, and the Huey Tzompantli is without doubt one of the most impressive archaeological finds of recent years in our country,” Mexican Culture Minister Alejandra Frausto said.

Archaeologists have identified three construction phases of the tower, which dates back to between 1486 and 1502. The tower’s original discovery surprised anthropologists, who had been expecting to find the skulls of young male warriors but also found the crania of women and children, raising questions about human sacrifice in the Aztec Empire.

“Although we can’t say how many of these individuals were warriors, perhaps some were captives destined for sacrificial ceremonies,” said archaeologist Raul Barrera.

“We do know that they were all made sacred,” he added. “Turned into gifts for the gods or even personifications of deities themselves.”

Sci-Advent – Getting the right grip: Designing soft and sensitive robotic fingers


To develop a more human-like robotic gripper, it is necessary to provide sensing capabilities to the fingers. However, conventional sensors compromise the mechanical properties of soft robots. Now, scientists have designed a 3D printable soft robotic finger containing a built-in sensor with adjustable stiffness. Their work represents a big step toward safer and more dexterous robotic handling, which will extend the applications of robots to fields such as health and elderly care.

Although robotics has reshaped and even redefined many industrial sectors, there still exists a gap between machines and humans in fields such as health and elderly care. For robots to safely manipulate or interact with fragile objects and living organisms, new strategies to enhance their perception while making their parts softer are needed. In fact, building a safe and dexterous robotic gripper with human-like capabilities is currently one of the most important goals in robotics.

One of the main challenges in the design of soft robotic grippers is integrating traditional sensors onto the robot’s fingers. Ideally, a soft gripper should have what’s known as proprioception — a sense of its own movements and position — to be able to safely execute varied tasks. However, traditional sensors are rigid and compromise the mechanical characteristics of the soft parts. Moreover, existing soft grippers are usually designed with a single type of proprioceptive sensation: either pressure or finger curvature.

To overcome these limitations, scientists at Ritsumeikan University, Japan, have been working on novel soft gripper designs under the lead of Associate Professor Mengying Xie. In their latest study published in Nano Energy, they successfully used multimaterial 3D printing technology to fabricate soft robotic fingers with a built-in proprioception sensor. Their design strategy offers numerous advantages and represents a large step toward safer and more capable soft robots.

The soft finger has a reinforced inflation chamber that makes it bend in a highly controllable way according to the input air pressure. In addition, the stiffness of the finger is also tunable by creating a vacuum in a separate chamber. This was achieved through a mechanism called vacuum jamming, by which multiple stacked layers of a bendable material can be made rigid by sucking out the air between them. Both functions combined enable a three-finger robotic gripper to properly grasp and maintain hold of any object by ensuring the necessary force is applied.

Most notable, however, is that a single piezoelectric layer was included among the vacuum jamming layers as a sensor. The piezoelectric effect produces a voltage difference when the material is under pressure. The scientists leveraged this phenomenon as a sensing mechanism for the robotic finger, providing a simple way to sense both its curvature and initial stiffness (prior to vacuum adjustment). They further enhanced the finger’s sensitivity by including a microstructured layer among the jamming layers to improve the distribution of pressure on the piezoelectric material.
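
As a rough illustration of how such a voltage signal could be turned into a curvature estimate, the sketch below fits a simple one-off calibration curve. The numbers and the linear model are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Hypothetical calibration data: measured peak voltage (V) at known curvatures (1/m).
calib_curvature = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
calib_voltage = np.array([0.02, 0.35, 0.71, 1.04, 1.42])

# Fit a linear voltage-to-curvature model; a real device may need a richer model
# (hysteresis, temperature, or the stiffness state after vacuum jamming).
coeffs = np.polyfit(calib_voltage, calib_curvature, deg=1)

def estimate_curvature(voltage: float) -> float:
    """Estimate finger curvature (1/m) from a piezoelectric voltage reading."""
    return float(np.polyval(coeffs, voltage))

print(estimate_curvature(0.9))   # estimated curvature for a 0.9 V reading
```

In the actual device the same piezoelectric signal is also used to infer the finger's initial stiffness, so a practical calibration would be richer than this single linear fit.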

The use of multimaterial 3D printing, a simple and fast prototyping process, allowed the researchers to easily integrate the sensing and stiffness-tuning mechanisms into the design of the robotic finger itself. “Our work suggests a way of designing sensors that contribute not only as sensing elements for robotic applications, but also as active functional materials to provide better control of the whole system without compromising its dynamic behavior,” says Prof Xie. Another remarkable feature of their design is that the sensor is self-powered by the piezoelectric effect, meaning that it requires no energy supply — essential for low-power applications.

Overall, this exciting new study will help future researchers find new ways of improving how soft grippers interact with and sense the objects being manipulated. In turn, this will greatly expand the uses of robots, as Prof Xie indicates: “Self-powered built-in sensors will not only allow robots to safely interact with humans and their environment, but also eliminate the barriers to robotic applications that currently rely on powered sensors to monitor conditions.”

Let’s hope this technology is further developed so that our mechanical friends can soon join us in many more human activities!

Flexible self-powered multifunctional sensor for stiffness-tunable soft robotic gripper by multimaterial 3D printing. Nano Energy, 2021; 79: 105438. DOI: 10.1016/j.nanoen.2020.105438

Sci-Advent – ‘Electronic amoeba’ finds approximate solution to traveling salesman problem in linear time

Researchers at Hokkaido University and Amoeba Energy in Japan have, inspired by the efficient foraging behavior of a single-celled amoeba, developed an analog computer for finding a reliable and swift solution to the traveling salesman problem — a representative combinatorial optimization problem.

Amoeba-inspired analog electronic computing system integrating resistance crossbar for solving the travelling salesman problem. Scientific Reports, 2020; 10 (1) DOI: 10.1038/s41598-020-77617-7

Many real-world application tasks such as planning and scheduling in logistics and automation are mathematically formulated as combinatorial optimization problems. Conventional digital computers, including supercomputers, are inadequate to solve these complex problems in practically permissible time as the number of candidate solutions they need to evaluate increases exponentially with the problem size — also known as combinatorial explosion. Thus new computers called “Ising machines,” including “quantum annealers,” have been actively developed in recent years. These machines, however, require complicated pre-processing to convert each task to the form they can handle and have a risk of presenting illegal solutions that do not meet some constraints and requests, resulting in major obstacles to the practical applications.

These obstacles can be avoided using the newly developed “electronic amoeba,” an analog computer inspired by a single-celled amoeboid organism. The amoeba is known to maximize nutrient acquisition efficiently by deforming its body. It has been shown to find an approximate solution to the traveling salesman problem (TSP), i.e., given a map of a certain number of cities, the problem is to find the shortest route for visiting each city exactly once and returning to the starting city. This finding inspired Professor Seiya Kasai at Hokkaido University to mimic the dynamics of the amoeba electronically using an analog circuit, as described in the journal Scientific Reports. “The amoeba core searches for a solution under the electronic environment where resistance values at intersections of crossbars represent constraints and requests of the TSP,” says Kasai. Using the crossbars, the city layout can be easily altered by updating the resistance values without complicated pre-processing.

Kenta Saito, a PhD student in Kasai’s lab, fabricated the circuit on a breadboard and succeeded in finding the shortest route for the 4-city TSP. He evaluated the performance for larger-sized problems using a circuit simulator. Then the circuit reliably found a high-quality legal solution with a significantly shorter route length than the average length obtained by the random sampling. Moreover, the time required to find a high-quality legal solution grew only linearly with the number of cities. Comparing the search time with a representative TSP algorithm “2-opt,” the electronic amoeba becomes more advantageous as the number of cities increases. “The analog circuit reproduces well the unique and efficient optimization capability of the amoeba, which the organism has acquired through natural selection,” says Kasai.
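
For context, here is a minimal sketch of the classical 2-opt heuristic used as the comparison point above, applied to a small random city layout. It is a generic textbook version, not the implementation used in the study.

```python
import numpy as np

def tour_length(tour, dist):
    """Total length of a closed tour given a distance matrix."""
    return sum(dist[tour[i], tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(dist):
    """Repeatedly reverse tour segments while doing so shortens the tour."""
    n = len(dist)
    tour = list(range(n))
    improved = True
    while improved:
        improved = False
        for i in range(1, n - 1):
            for j in range(i + 1, n):
                new_tour = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                if tour_length(new_tour, dist) < tour_length(tour, dist):
                    tour, improved = new_tour, True
    return tour

rng = np.random.default_rng(0)
cities = rng.random((12, 2))                                   # 12 random cities
dist = np.linalg.norm(cities[:, None, :] - cities[None, :, :], axis=-1)
best = two_opt(dist)
print(best, tour_length(best, dist))
```

Because every pass of 2-opt re-evaluates on the order of n² candidate segment reversals, its run time grows much faster than linearly with the number of cities, which is the scaling against which the amoeba circuit's roughly linear behaviour is being compared.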

“As the analog computer consists of a simple and compact circuit, it can tackle many real-world problems in which inputs, constraints, and requests dynamically change and can be embedded into IoT devices as a power-saving microchip,” says Masashi Aono who leads Amoeba Energy to promote the practical use of the amoeba-inspired computers.

This is a Joint Release between Hokkaido University and Amoeba Energy Co., Ltd. More information

Sci-Advent – New superhighway system discovered in the Solar System

Researchers have discovered a new superhighway network to travel through the Solar System much faster than was previously possible. Such routes can drive comets and asteroids near Jupiter to Neptune’s distance in under a decade and to 100 astronomical units in less than a century. They could be used to send spacecraft to the far reaches of our planetary system relatively fast, and to monitor and understand near-Earth objects that might collide with our planet.

The arches of chaos in the Solar System. Science Advances, 2020; 6 (48): eabd1313 DOI: 10.1126/sciadv.abd1313

In their paper, published in the Nov. 25 issue of Science Advances, the researchers observed the dynamical structure of these routes, forming a connected series of arches inside what’s known as space manifolds that extend from the asteroid belt to Uranus and beyond. This newly discovered “celestial autobahn” or “celestial highway” acts over several decades, as opposed to the hundreds of thousands or millions of years that usually characterize Solar System dynamics.

The most conspicuous arch structures are linked to Jupiter and the strong gravitational forces it exerts. The population of Jupiter-family comets (comets with orbital periods of less than 20 years), as well as the small solar system bodies known as Centaurs, is controlled by such manifolds on unprecedented time scales. Some of these bodies will end up colliding with Jupiter or being ejected from the Solar System.

The structures were resolved by gathering numerical data about millions of orbits in our Solar System and computing how these orbits fit within already-known space manifolds. The results need to be studied further, both to determine how they could be used by spacecraft and to understand how such manifolds behave in the vicinity of the Earth, where they control asteroid and meteorite encounters as well as the growing population of artificial human-made objects in the Earth-Moon system.

Sci-Advent – Trends in prevalence of blindness and distance and near vision impairment over 30 years

Keeping up with yesterday’s Sci-Advent post about vision and optics, this report from the University of Michigan is relevant news. Researchers say eye care accessibility around the globe isn’t keeping up with an aging population, posing challenges for eye care professionals over the next 30 years.

As the global population grows and ages, so does their need for eye care. But according to two new studies published in The Lancet Global Health, these needs aren’t being met relative to international targets to reduce avoidable vision loss.

As 2020 comes to a close, an international group of researchers set out to provide updated estimates on the number of people that are blind or visually impaired across the globe, to identify the predominant causes, and to illustrate epidemiological trends over the last 30 years.

“This is important because when we think about setting a public health agenda, knowing the prevalence of an impairment, what causes it, and where in the world it’s most common informs the actions that key decision makers like the WHO and ministries of health take to allocate limited resources,” says Joshua Ehrlich, M.D., M.P.H., a study author and ophthalmologist at Kellogg Eye Center.

The study team assesses a collection of secondary data every five years, undertaking a meta-analysis of population-based surveys of eye disease assembled by the Vision Loss Expert Group and spanning from 1980 to 2018.

Creating a blueprint

A study like this poses challenges since regional populations vary in age.

“For example, the population in some Asian and European countries is much older on average than the population in many African nations. Many populations are also growing older over time. A direct comparison of the percentage of the population with blindness or vision impairment wouldn’t paint a complete picture,” explains Ehrlich, who is also a member of the University of Michigan’s Institute for Healthcare Policy and Innovation.

To address this issue, the study looked at age-standardized prevalence, accomplished by adjusting regional populations to fit a standard age structure.
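
As a toy illustration of direct age standardisation (all numbers invented), each region's age-specific prevalence is weighted by a common standard age structure before comparison:

```python
import numpy as np

# Standard population weights for three age bands (must sum to 1).
standard_weights = np.array([0.45, 0.35, 0.20])        # <40, 40-69, 70+

# Age-specific prevalence of vision impairment in two hypothetical regions.
prevalence_region_a = np.array([0.01, 0.05, 0.20])
prevalence_region_b = np.array([0.01, 0.04, 0.18])

for name, prev in [("A", prevalence_region_a), ("B", prevalence_region_b)]:
    # Weighted average of age-specific rates under the shared standard structure.
    standardized = float(np.dot(standard_weights, prev))
    print(f"Region {name}: age-standardised prevalence = {standardized:.3%}")
```

With identical age-specific rates, a region with an older population would show a higher crude prevalence simply because of its age profile; weighting by a common standard removes that effect, which is the comparison the study relies on.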

“We found that the age-standardized prevalence is decreasing around the world, which tells us eye care systems and quality of care are getting better,” says study author Monte A. Del Monte, M.D., a pediatric ophthalmologist at Kellogg Eye Center. “However, as populations age, a larger number of people are being affected by serious vision impairment, suggesting we need to improve accessibility to care and further develop human resources to provide care.”

In fact, the researchers found that there wasn’t any significant reduction in the number of people with treatable vision loss in the last ten years, which paled in comparison to the World Health Assembly Global Action Plan target of a 25% global reduction of avoidable vision loss in this same time frame.

Although findings varied by region globally, cataracts and the unmet need for glasses were the most prevalent causes of moderate to severe vision impairment. Approximately 45% of the 33.6 million cases of global blindness were caused by cataracts, which can be treated with surgery.

Refractive error, which causes a blurred image resulting from an abnormal shape of the cornea and lens not bending light correctly, accounted for vision loss in 86 million people across the globe. This largest contributor to moderate or severely impaired vision can be easily treated with glasses.

Also important, vision impairment due to diabetic retinopathy, a complication of diabetes that affects eyesight, was found to have increased in global prevalence.

“This is another condition in which we can prevent vision loss with early screenings and intervention,” says study author Alan L. Robin, M.D., a collaborating ophthalmologist at Kellogg Eye Center and professor at Johns Hopkins Medicine. “As diabetes becomes more common across the globe, this condition may begin to affect younger populations, as well.”

Looking to 2050

“Working as a global eye care community, we need to now look at the next 30 years,” Ehrlich says. “We hope to take these findings and create implementable strategies with our global partners through our Kellogg Eye Center for International Ophthalmology so fewer people go blind unnecessarily.”

In an effort to contribute to the WHO initiative VISION 2020: The Right to Sight, the researchers updated estimates of the global burden of vision loss and provided predictions for what the year 2050 may look like.

They found that the majority of the 43.9 million people blind globally are women. Women also make up the majority of the 295 million people who have moderate to severe vision loss, the 163 million who have mild vision loss and the 510 million who have visual impairments related to the unmet need for glasses, specifically poor near vision.

By 2050, Ehrlich, Del Monte, and Robin predict 61 million people will be blind, 474 million will have moderate and severe vision loss, 360 million will have mild vision loss and 866 million will have visual impairments related to farsightedness.

“Eliminating preventable blindness globally isn’t keeping pace with the global population’s needs,” Ehrlich says. “We face enormous challenges in treating and preventing vision impairment as the global population grows and ages, but I’m optimistic of a future where we will succeed because of the measures we take now to make a difference.”

Both studies were funded by Brien Holden Vision Institute, Fondation Théa, Fred Hollows Foundation, Bill & Melinda Gates Foundation, Lions Clubs International Foundation, Sightsavers International and the University of Heidelberg.

GBD 2019 Blindness and Vision Impairment Collaborators, on behalf of the Vision Loss Expert Group of the Global Burden of Disease Study. Causes of blindness and vision impairment in 2020 and trends over 30 years, and prevalence of avoidable blindness in relation to VISION 2020: the Right to Sight: an analysis for the Global Burden of Disease Study. The Lancet Global Health, 2020; DOI: 10.1016/S2214-109X(20)30489-7