Health Science blogs

28 September 2021

Science Blogs
  • Ancient disaster destroyed Biblical city Tall el-Hammam
    28 September 2021

    In the Middle Bronze Age (about 3600 years ago or roughly 1650 BCE), the city of Tall el-Hammam was ascendant. Located on high ground in the southern Jordan Valley, northeast of the Dead Sea, the settlement in its time had become the largest continuously occupied Bronze Age city in the southern Levant, having hosted early civilization for a few thousand years. At that time, it was 10 times larger than Jerusalem and 5 times larger than Jericho.

    “It’s an incredibly culturally important area,” said James Kennett, emeritus professor of earth science at UC Santa Barbara. “Much of where the early cultural complexity of humans developed is in this general area.”

    A favorite site for archaeologists and biblical scholars, the mound hosts evidence of culture all the way from the Chalcolithic, or Copper Age, all compacted into layers as the highly strategic settlement was built, destroyed, and rebuilt over millennia.

    But there is a 1.5-meter interval in the Middle Bronze Age II stratum that caught the interest of some researchers, for its “highly unusual” materials. In addition to the debris one would expect from destruction via warfare and earthquakes, they found pottery shards with outer surfaces melted into glass, “bubbled” mudbrick, and partially melted building material, all indications of an anomalously high-temperature event, much hotter than anything the technology of the time could produce.

    “We saw evidence for temperatures greater than 2,000 degrees Celsius,” said Kennett, whose research group at the time happened to have been building the case for an older cosmic airburst about 12,800 years ago that triggered major widespread burning, climatic changes and animal extinctions. The charred and melted materials at Tall el-Hammam looked familiar, and a group of researchers including impact scientist Allen West and Kennett joined Trinity Southwest University biblical scholar Philip J. Silvia’s research effort to determine what happened at this city 3,650 years ago.

    Their results are published in the journal Nature Scientific Reports.

    Salt and Bone
    “There’s evidence of a large cosmic airburst, close to this city called Tall el-Hammam,” Kennett said, of an explosion similar to the Tunguska Event, a roughly 12-megaton airburst that occurred in 1908, when a 56-60-meter meteor pierced the Earth’s atmosphere over the Eastern Siberian Taiga.

    The shock of the explosion over Tall el-Hammam was enough to level the city, flattening the palace and surrounding walls and mudbrick structures, according to the paper, and the distribution of bones indicated “extreme disarticulation and skeletal fragmentation in nearby humans.”

    For Kennett, further proof of the airburst was found by conducting many different kinds of analyses on soil and sediments from the critical layer. Tiny iron- and silica-rich spherules turned up in their analysis, as did melted metals.

    “I think one of the main discoveries is shocked quartz. These are sand grains containing cracks that form only under very high pressure,” Kennett said of one of many lines of evidence that point to a large airburst near Tall el-Hammam. “We have shocked quartz from this layer, and that means there were incredible pressures involved to shock the quartz crystals — quartz is one of the hardest minerals; it’s very hard to shock.”

    The airburst, according to the paper, may also explain the “anomalously high concentrations of salt” found in the destruction layer — an average of 4% in the sediment and as high as 25% in some samples.

    “The salt was thrown up due to the high impact pressures,” Kennett said, of the meteor that likely fragmented upon contact with the Earth’s atmosphere. “And it may be that the impact partially hit the Dead Sea, which is rich in salt.” The local shores of the Dead Sea are also salt-rich so the impact may have redistributed those salt crystals far and wide — not just at Tall el-Hammam, but also nearby Tell es-Sultan (proposed as the biblical Jericho, which also underwent violent destruction at the same time) and Tall-Nimrin (also then destroyed).

    The high-salinity soil could have been responsible for the so-called “Late Bronze Age Gap,” the researchers say, in which cities along the lower Jordan Valley were abandoned, dropping the population from tens of thousands to maybe a few hundred nomads. Nothing could grow in these formerly fertile grounds, forcing people to leave the area for centuries. Evidence for resettlement of Tall el-Hammam and nearby communities appears again in the Iron Age, roughly 600 years after the cities’ sudden devastation in the Bronze Age.

    Fire and Brimstone
    Tall el-Hammam has been the focus of an ongoing debate as to whether it could be the biblical city of Sodom, one of the two cities in the Old Testament Book of Genesis that were destroyed by God for how wicked they and their inhabitants had become. One denizen, Lot, is saved by two angels who instruct him not to look back as they flee. Lot’s wife, however, lingers and is turned into a pillar of salt. Meanwhile, fire and brimstone fell from the sky; multiple cities were destroyed; thick smoke rose from the fires; city inhabitants were killed and area crops were destroyed in what sounds like an eyewitness account of a cosmic impact event. It’s a satisfying connection to make.

    “All the observations stated in Genesis are consistent with a cosmic airburst,” Kennett said, “but there’s no scientific proof that this destroyed city is indeed the Sodom of the Old Testament.” However, the researchers said, the disaster could have generated an oral tradition that may have served as the inspiration for the written account in the book of Genesis, as well as the biblical account of the burning of Jericho in the Old Testament Book of Joshua.

  • Happiness in Early Adulthood May Protect Against Dementia
    28 September 2021

    While research has shown that poor cardiovascular health can damage blood flow to the brain, increasing the risk for dementia, a new study led by UC San Francisco indicates that poor mental health may also take its toll on cognition.

    The research adds to a body of evidence that links depression with dementia, but while most studies have pointed to its association in later life, the UCSF study shows that depression in early adulthood may lead to lower cognition 10 years later and to cognitive decline in old age.

    The study was published in the Journal of Alzheimer’s Disease on Sept. 28, 2021.

    The researchers used innovative statistical methods to predict average trajectories of depressive symptoms for approximately 15,000 participants ages 20 to 89, divided into three life stages: older, midlife and young adulthood. They then applied these predicted trajectories and found that in a group of approximately 6,000 older participants, the odds of cognitive impairment were 73 percent higher for those estimated to have elevated depressive symptoms in early adulthood, and 43 percent higher for those estimated to have elevated depressive symptoms in later life.

    These results were adjusted for depressive symptoms in other life stages and for differences in age, sex, race, educational attainment, body mass index, history of diabetes and smoking status. For depressive symptoms in midlife, the researchers found an association with cognitive impairment, but this was discounted when they adjusted for depression in other life stages.
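
    In effect, the analysis described above amounts to a covariate-adjusted logistic regression relating estimated depressive symptoms in each life stage to the odds of later cognitive impairment. The sketch below is not the study’s code; it only illustrates, on synthetic data with hypothetical column names, how such adjusted odds ratios are commonly estimated in Python.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 6000  # roughly the size of the older cohort described above

    # Synthetic stand-ins for estimated symptom levels and covariates.
    df = pd.DataFrame({
        "dep_early": rng.normal(size=n),
        "dep_mid": rng.normal(size=n),
        "dep_late": rng.normal(size=n),
        "age": rng.normal(72, 5, size=n),
        "female": rng.integers(0, 2, size=n),
        "education": rng.normal(14, 3, size=n),
    })
    # Synthetic outcome: impairment odds rise with early-adulthood symptoms and age.
    logit = -3 + 0.55 * df["dep_early"] + 0.03 * (df["age"] - 72)
    df["impaired"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    # Odds of impairment, adjusted for symptoms in other life stages and covariates.
    model = smf.logit(
        "impaired ~ dep_early + dep_mid + dep_late + age + female + education",
        data=df,
    ).fit(disp=False)

    # exp(coefficient) is an adjusted odds ratio; a value near 1.73 would match
    # the reported "73 percent higher" odds for early-adulthood symptoms.
    print(np.exp(model.params[["dep_early", "dep_mid", "dep_late"]]))
    ```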

    Excess Stress Hormones May Damage Ability to Make New Memories

    “Several mechanisms explain how depression might increase dementia risk,” said first author Willa Brenowitz, PhD, MPH, of the UCSF Department of Psychiatry and Behavioral Sciences and the UCSF Weill Institute for Neurosciences. “Among them is that hyperactivity of the central stress response system increases production of the stress hormones glucocorticoids, leading to damage of the hippocampus, the part of the brain essential for forming, organizing and storing new memories.”

    Other studies have linked depression with atrophy of the hippocampus, and one study has shown faster rates of volume loss in women, she said.

    In estimating the depressive symptoms across each life stage, researchers pooled data from younger participants with data from the approximately 6,000 older participants and predicted average trajectories. These older participants, who were on average 72 years old at the start of the study and lived at home, had been enrolled in the Health, Aging and Body Composition Study and the Cardiovascular Health Study. They were followed annually or semi-annually for up to 11 years.

    U-Shaped Curve Adds Credence to Predicted Trajectories

    The authors noted that imputed values had to be used because no longitudinal studies have followed participants across the entire life course. “Imputed depressive symptom trajectories fit a U-shaped curve, similar to age-related trends in other research,” they wrote.

    Participants were screened for depression using a tool called the CESD-10, a 10-item questionnaire assessing symptoms in the past week. Moderate or high depressive symptoms were found in 13 percent of young adults, 26 percent of midlife adults and 34 percent of older participants.

    Some 1,277 participants were diagnosed with cognitive impairment following neuropsychological testing, evidence of global decline, documented use of a dementia medication or hospitalization with dementia as a primary or secondary diagnosis.

    “Generally, we found that the greater the depressive symptoms, the lower the cognition and the faster the rates of decline,” said Brenowitz, who is also affiliated with the UCSF Department of Epidemiology and Biostatistics. “Older adults estimated to have moderate or high depressive symptoms in early adulthood were found to experience a drop in cognition over 10 years.”

    With up to 20 percent of the population suffering from depression during their lifetime, it’s important to recognize its role in cognitive aging, said senior author Kristine Yaffe, MD, of the UCSF departments of Psychiatry and Behavioral Sciences, and Epidemiology and Biostatistics. “Future work will be needed to confirm these findings, but in the meantime, we should screen and treat depression for many reasons.”

    Co-Authors: Eric Vittinghoff, PhD, from UCSF; Adina Zeki Al Hazzouri, PhD, from Columbia University; Sherita H. Golden, MD, from Johns Hopkins University School of Medicine; and Annette L. Fitzpatrick, PhD, from University of Washington.

    Funding: National Institutes of Health and National Institute on Aging (1RF1AG054443).

  • Humans may have hatched and raised deadly cassowary chicks
    28 September 2021

    As early as 18,000 years ago, humans in New Guinea may have collected cassowary eggs near maturity and then raised the birds to adulthood, according to an international team of scientists, who used eggshells to determine the developmental stage of the ancient embryos/chicks when the eggs cracked.

    “This behavior that we are seeing is coming thousands of years before domestication of the chicken,” said Kristina Douglass, assistant professor of anthropology and African studies, Penn State. “And this is not some small fowl, it is a huge, ornery, flightless bird that can eviscerate you. Most likely the dwarf variety that weighs 20 kilos (44 pounds).”

    The researchers report today (Sept. 27) in the Proceedings of the National Academy of Sciences that “the data presented here may represent the earliest indication of human management of the breeding of an avian taxon anywhere in the world, preceding the early domestication of chicken and geese by several millennia.”

    Cassowaries are not chickens; in fact, they bear more resemblance to velociraptors than most domesticated birds. “However, cassowary chicks imprint readily to humans and are easy to maintain and raise up to adult size,” the researchers report. Imprinting occurs when a newly hatched bird decides that the first thing it sees is its mother. If that first glance happens to catch sight of a human, the bird will follow the human anywhere.

    According to the researchers, cassowary chicks are still traded as a commodity in New Guinea.

    Importance of eggshells

    Eggshells are part of the assemblage of many archeological sites, but according to Douglass, archaeologists do not often study them. The researchers developed a new method to determine how old a chick embryo was when an egg was harvested. They reported this work in a recent issue of the Journal of Archaeological Science.

    “I’ve worked on eggshells from archaeological sites for many years,” said Douglass. “I discovered research on turkey eggshells that showed changes in the eggshells over the course of development that were an indication of age. I decided this would be a useful approach.”

    The age assignment of the embryos/chicks depends on the 3-dimensional features of the inside of the shell. To develop the method needed to determine the eggs’ developmental age when the shells broke, the researchers used ostrich eggs from a study done to improve ostrich reproduction.  Researchers at the Oudtshoorn Research Farm, part of the Western Cape Government of South Africa, harvested three eggs every day of incubation for 42 days for their study and supplied Douglass and her team with samples from 126 ostrich eggs.

    They took four samples from each of these eggs for a total of 504 shell samples, each having a specific age. They created high-resolution, 3D images of the shell samples. By inspecting the insides of these eggs, the researchers created a statistical assessment of what the eggs looked like during stages of incubation. The researchers then tested their model with modern ostrich and emu eggs of known age.

    The insides of the eggshells change through development because the developing chicks get calcium from the eggshell. Pits begin to appear in the middle of development.

    “It is time dependent, but a little more complicated,” said Douglass. “We used a combination of 3D imaging, modeling and morphological descriptions.”
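
    To illustrate the general shape of such an approach (this is a hypothetical sketch, not the published method), one can fit a regression model that maps eggshell-surface measurements to known incubation day and then apply it to fragments of unknown age. The feature names below, such as pit density and roughness, are illustrative stand-ins for whatever morphological measurements the 3D imaging actually yields.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)

    # Reference collection: 504 shell samples with known incubation day (0-42),
    # as in the ostrich eggs described above. The two "features" are synthetic.
    day = rng.integers(0, 43, size=504)
    X = np.column_stack([
        0.02 * day + rng.normal(0, 0.1, size=504),  # e.g. pit density (hypothetical)
        0.01 * day + rng.normal(0, 0.1, size=504),  # e.g. surface roughness (hypothetical)
    ])
    y = day

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    print("cross-validated R^2:", cross_val_score(model, X, y, cv=5).mean())

    # Apply the fitted model to a fragment of unknown age.
    model.fit(X, y)
    fragment = np.array([[0.75, 0.35]])
    print("estimated incubation day:", model.predict(fragment)[0])
    ```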

    The researchers then turned to legacy shell collections from two sites in New Guinea — Yuku and Kiowa. They applied their approach to more than 1,000 fragments of these 18,000- to 6,000-year-old eggs.

    “What we found was that a large majority of the eggshells were harvested during late stages,” said Douglass. “The eggshells look very late; the pattern is not random. They were either into eating baluts or they are hatching chicks.”

    A balut is a nearly developed embryo chick usually boiled and eaten as street food in parts of Asia.

    The original archaeologists found no indication of penning for the cassowaries. The few cassowary bones found at sites are only those of the meaty portions — leg and thigh — suggesting these were hunted birds, processed in the wild and only the meatiest parts got hauled home.

    “We also looked at burning on the eggshells,” said Douglass. “There are enough samples of late stage eggshells that do not show burning that we can say they were hatching and not eating them.”

    To successfully hatch and raise cassowary chicks, the people would need to know where the nests were, know when the eggs were laid and remove them from the nest just before hatching. Back in the late Pleistocene, according to Douglass, humans were purposefully collecting these eggs and this study suggests people were not just harvesting eggs to eat the contents.

    Also working on this project from Penn State were Priyangi Bulathsinhala, assistant teaching professor of statistics; Tim Tighe, assistant research professor, Materials Research Institute; and Andrew L. Mack, grants and contract coordinator, Penn State Altoona.

    Others working on the project include Dylan Gaffney, graduate student, University of Cambridge, U.K.; Theresa J. Feo, senior science officer, California Council of Science and Technology; and Megan Spitzer, research assistant; Scott Whittaker, manager, scientific imaging; Helen James, research zoologist and curator of birds; and Torben Rick, curator of North American Archaeology, all at the National Museum of Natural History, Smithsonian Institution. Glenn R. Summerhayes, professor of archaeology, University of Otago, New Zealand; and Zanell Brand, production scientist, Oudtshoorn Research Farm, Elsenburg, Department of Agriculture, Western Cape Government, South Africa, also worked on the project.

    The Smithsonian National Museum of Natural History, the National Science Foundation and Penn State’s College of the Liberal Arts supported this work.

  • A new approach to the data-deletion conundrum
    28 September 2021

    Rising consumer concern over data privacy has led to a rush of “right to be forgotten” laws around the world that allow individuals to request their personal data be expunged from massive databases that catalog our increasingly online lives. Researchers in artificial intelligence have observed that user data does not only exist in its raw form in a database; it is also implicitly contained in models trained on that data. So far, they have struggled to find methods for deleting these “traces” of users efficiently. The more complex the model is, the more challenging it becomes to delete data.

    “The exact deletion of data — the ideal — is hard to do in real time,” says James Zou, a professor of biomedical data science at Stanford University and an expert in artificial intelligence. “In training our machine learning models, bits and pieces of data can get embedded in the model in complicated ways. That makes it hard for us to guarantee a user has truly been forgotten without altering our models substantially.”

    Zou is senior author of a paper recently presented at the International Conference on Artificial Intelligence and Statistics (AISTATS) that may provide a possible answer to the data deletion problem that works for privacy-concerned individuals and artificial intelligence experts alike. They call it approximate deletion.

    Read the study: Approximate Data Deletion from Machine Learning Models

    “Approximate deletion, as the name suggests, allows us to remove most of the users’ implicit data from the model. They are ‘forgotten,’ but in such a way that we can do the retraining of our models at a later, more opportune time,” says Zach Izzo, a graduate student in mathematics and the first author of the AISTATS paper.

    Approximate deletion is especially useful in quickly removing sensitive information or features unique to a given individual that could potentially be used for identification after the fact, while postponing the computationally intensive full model retraining to times of lower computational demand. Under certain assumptions, Zou says, approximate deletion even achieves the holy grail of exact deletion of a user’s implicit data from the trained model.

    Driven by Data

    Machine learning works by combing databases and applying various predictive weights to features in the data — an online shopper’s age, location, and previous purchase history, for instance, or a streamer’s past viewing history and personal ratings of movies  watched. The models are not confined to commercial applications and are now widely used in radiology, pathology, and other fields of direct human impact.

    In theory, information in a database is anonymized, but users concerned about privacy fear that they can still be identified by the bits and pieces of information about them that are still wedged in the models, begetting the need for right to be forgotten laws.

    The gold standard in the field, Izzo says, is to find the exact same model as if the machine learning had never seen the deleted data points in the first place. That standard, known as “exact deletion,” is hard if not impossible to achieve, especially with large, complicated models like those that recommend products or movies to online shoppers and streamers. Exact data deletion effectively means retraining a model from scratch, Izzo says.

    “Doing that requires taking the algorithm offline for retraining. And that costs real money and real time,” he says.

    What is Approximate Deletion?

    In solving the deletion conundrum, Zou and Izzo have come at things slightly differently than their counterparts in the field. In effect, they create synthetic data to replace — or, more accurately, negate — that of the individual who wishes to be forgotten.

    This temporary solution satisfies the privacy-minded individual’s immediate desire to not be identified from data in the model — that is, to be forgotten — while reassuring the computer scientists, and the businesses that rely upon them, that their models will work as planned, at least until a more opportune time when the model can be retrained at lower cost.
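
    As a toy illustration of the underlying goal (removing one person’s influence from a trained model without retraining from scratch), consider a ridge-regression model, where a single data point can be removed exactly with a cheap rank-one update. This is a standard linear-algebra trick, not the approximate-deletion algorithm from the paper, which targets more general models.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n, d, lam = 1000, 5, 1e-2

    X = rng.normal(size=(n, d))
    y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

    A = X.T @ X + lam * np.eye(d)   # X^T X + lambda * I
    b = X.T @ y
    A_inv = np.linalg.inv(A)
    w_full = A_inv @ b              # model trained on everyone's data

    # User i asks to be forgotten.
    i = 42
    x_i, y_i = X[i], y[i]

    # Sherman-Morrison rank-one update of A^{-1} after removing x_i x_i^T.
    Ax = A_inv @ x_i
    A_inv_del = A_inv + np.outer(Ax, Ax) / (1.0 - x_i @ Ax)
    w_deleted = A_inv_del @ (b - y_i * x_i)

    # Compare against the "exact deletion" target: retraining without row i.
    mask = np.arange(n) != i
    w_retrained = np.linalg.solve(
        X[mask].T @ X[mask] + lam * np.eye(d), X[mask].T @ y[mask]
    )
    print(np.allclose(w_deleted, w_retrained))  # True
    ```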

    There is a philosophical aspect to the challenge, the authors say. Where privacy, law, and commerce intersect, the discussion begins with a meaningful definition of what it means to “delete” information. Does deletion mean the actual destruction of data? Or is it enough to ensure that no one could ever identify an anonymous person from it? In the end, Izzo says, answering that key question requires balancing the privacy rights of consumers and the needs of science and commerce.

    “That’s a pretty difficult, non-trivial question,” Izzo says. “For many of the more complicated models used in practice, even if you delete zero people from a database, retraining alone can result in a completely different model. So even defining the proper target for the retrained model is challenging.”

    With their approximate deletion approach in hand, the authors then validated the effectiveness of their method empirically, confirming their theoretical approach on the path to practical application. That critical step now becomes the goal of future work.

    “We think approximate deletion is an important initial step toward solving what has been a difficult challenge for AI,” Zou says.

  • Antimicrobial Coating for Orthopedic Implants Prevents Dangerous Infections
    28 September 2021

    Biomedical engineers and surgeons at Duke University and UCLA have demonstrated an antibiotic coating that can be applied to orthopedic implants minutes before surgery that eliminates the chances of an infection around the implant.

    In early trials in mice, the coating prevented all subsequent infections, even without infusions of antibiotics into the bloodstream, which is the current standard of care. After 20 days, the coating did not reduce the bone’s ability to fuse with the implant and was completely absorbed by the body.

    The results appear online September 16 in the journal Nature Communications.

    The project began when Tatiana Segura, professor of biomedical engineering at Duke, met Nicholas Bernthal, interim chair and executive medical director at the David Geffen School of Medicine at UCLA, who specializes in pediatric orthopedic oncology and surgery. He told Segura that many children being treated for bone cancer have large portions of bone removed, which then requires orthopedic implants. But because the patients are usually also undergoing chemotherapy, their immune systems are weak and they are especially vulnerable to bacteria colonizing the surface of the implant.

    “These kids face the choice of having chemotherapy versus saving their limb or even sometimes needing amputations to survive, which sounds horrific to me,” Segura said. “All they really need is something to rub on the implant to stop an infection from taking hold, because preventing an infection is much easier than treating one. So we came up with this coating technology that we hope will provide a solution.”

    Implant infections aren’t unique to children or to cancer patients, however. For joint replacement surgeries, for example, infection occurs in 1% of primary and up to 7% of revision surgeries, and treating it requires repeated revision surgeries and prolonged intravenous antibiotics. Treatment doesn’t always work, as these patients have a higher five-year mortality risk than those diagnosed with HIV/AIDS or breast cancer. Implant infections are estimated to cost the health care system more than $8.6 billion annually in the U.S. alone.

    Part of the challenge of treating these infections is that bacteria colonize the surface of the implants themselves. This means that there are no blood vessels flowing through the bacterial colonies to deliver the antibiotics coursing through a patient’s veins. The only recourse is often the removal of the original implant, which is usually the best of what are only bad options.

    Some doctors have taken to their own solutions, such as using antibiotic powder when closing the surgical wound or infusing the bone cement used to hold the implant in place with antibiotics. Neither of these tactics has been proven to be clinically effective. There is also the option of implant manufacturers adding antibiotic properties to their devices. But this would greatly reduce the product’s shelf life and also require a long and complicated process of FDA approval, since the implants would then be in a new classification.

    Segura’s new antibiotic coating sidesteps all of these challenges.

    “We’ve shown that a point-of-care, antibiotic-releasing coating protects implants from bacterial challenge, and can be quickly and safely applied in the operating room without the need to modify existing implants,” said Christopher Hart, a resident physician in UCLA Orthopaedic Surgery who helped conduct the experiments.

    The new antimicrobial coating is made of two polymers, one that repels water and one that mixes well with water. Both are combined in a solution with an antibiotic of the physician’s choosing and then applied directly to the orthopedic implant by dipping, painting or spraying. When exposed to a bright ultraviolet light, the two polymers couple together and self-assemble into a grid-like structure that traps the antibiotics.

    The reaction is an example of “click chemistry,” which is a general way of describing reactions that happen quickly at room temperature, produce only a single reaction product, have an extremely high yield and occur within a single container.

    “This study is a great example of the power of click chemistry in biomedical applications,” said Weixian Xi, now a senior scientist at Illumina who was a postdoctoral researcher at UCLA during the study. “This ‘smart’ and ‘clickable’ polymeric coating enables protections of implants from bacterial infection and makes a personalized approach possible.”

    “Our coating can be personalizable because it can use almost any antibiotic,” Segura continued. “The antibiotic can be chosen by the physician based on where in the body the device is being implanted and what pathogens are common in whatever part of the world the surgery is taking place.”

    The click chemistry polymer grid also has an affinity for metal. Tests involving various types of implants showed that the coating was very difficult to rub off during surgical procedures. Once inside the body, however, the conditions cause the polymer to degrade, slowly releasing the antibiotics over the course of two to three weeks.

    In the study, researchers rigorously tested the coating in mice with either leg or spine implants. After 20 days, the coating did not inhibit the bone’s growth into the implant and prevented 100% of infections. This time period, the researchers say, is long enough to prevent the vast majority of these types of infections from occurring.

    The researchers have not yet tested their coating on larger animals. Since larger animals—such as humans—have larger bones and need larger implants, there is much more surface area to protect against bacterial infections. But the researchers are confident that their invention is up to the task and plan to pursue the steps needed to commercialize the product.

    “We believe this transdisciplinary work represents the future of surgical implants, providing a point of application coating that transforms the implant from a hotspot for infection into a ‘smart’ antimicrobial therapeutic,” Bernthal said. “You only need to treat a single patient with an infected implant to realize how transformational this could be for patient care —saving both life and limbs for many.”

    This research was supported by the National Institute of Arthritis and Musculoskeletal and Skin Diseases of the National Institutes of Health (5K08AR069112-01, T32AR059033).

    “Point-Of-Care Antimicrobial Coating Protects Orthopaedic Implants From Bacterial Challenge.” Weixian Xi, Vishal Hegde, Stephen D. Zoller, Howard Y. Park, Christopher M. Hart, Takeru Kondo, Christopher D. Hamad, Yan Hu, Amanda H. Loftin, Daniel O. Johansen, Zachary Burke, Samuel Clarkson, Chad Ishmael, Kellyn Hori, Zeinab Mamouei, Hiroko Okawa, Ichiro Nishimura, Nicholas M. Bernthal and Tatiana Segura. Nature Communications, Sept. 17, 2021. DOI: 10.1038/s41467-021-25383-z

  • Male giraffes are more socially connected than females
    28 September 2021

    Although female giraffes have closer “friends” than male giraffes, male giraffes have more “acquaintances” than females, according to a new study by an international team that includes a Penn State biologist. The study demonstrates that giraffes form a complex multilevel society that is driven by differences in the social connections among individuals, which could have conservation implications for the endangered giraffes.

    “The degree to which an animal is connected to others in its social network influences reproductive success and population ecology, spread of information, and even how diseases move through a population,” said Derek Lee, associate research professor at Penn State and an author of the paper. “Information about sociality therefore can provide important guidance for conservation.”

    The research team examined social connectedness and social movements of endangered Masai giraffes in the Tarangire Ecosystem of northern Tanzania using data collected over 5 years. The work, led by Juan Lavista Ferres of the Microsoft AI for Good Research Lab, involved constructing the social network of more than 1,000 free-ranging giraffes. The team presents their results in a paper appearing Sept. 27 in the journal Animal Behaviour.

    “We found that male giraffes overall had higher social connectedness than females, which means males interact with greater numbers of other individuals than females,” said Lee. “Older males had the shortest social path length to all the other giraffes in the network. This might reflect the mating strategy of males, who roam widely across the landscape searching for females to mate with and make connections in the process. Young males had the most social ties and moved most often among groups, reflecting social exploration as they prepare to disperse away from their mothers.”

    According to the study, adult female giraffes tend to have fewer but stronger relationships with each other than males and younger females, a trend that has also been observed in giraffe populations elsewhere in Africa. The researchers previously found that relationships among female giraffes allow them to live longer.

    The results reveal an additional layer of complexity to giraffe societies beyond what was seen in earlier research. Previous research showed that adult females in this population have formed about a dozen distinct groups, or communities, of 60 to 90 individuals that tend to associate more with each other than with members of the other groups, even when the groups use the same spaces. The current study builds on this knowledge and found that the full population, including calves and adult males, has a more complex structure: The female communities are embedded within three socially distinct larger groups called ‘super-communities’ of between 800 and 900 individuals, and one ‘oddball’ super-community of 155 individuals in a small, isolated area.
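
    The measures described above (how many other individuals a giraffe associates with, how few steps separate it from everyone else in the network, and whether individuals cluster into communities) are standard social-network statistics. The sketch below is not the study’s analysis; it computes those quantities with networkx on a small synthetic graph standing in for observed giraffe associations.

    ```python
    import networkx as nx
    from networkx.algorithms import community

    # Four planted groups of 30 individuals: dense ties within groups, sparse between.
    G = nx.planted_partition_graph(4, 30, p_in=0.3, p_out=0.02, seed=0)

    # "Social connectedness": number of distinct associates per individual.
    degrees = dict(G.degree())
    print("mean number of associates:", sum(degrees.values()) / len(degrees))

    # "Social path length", computed on the largest connected component.
    giant = G.subgraph(max(nx.connected_components(G), key=len))
    print("average shortest path length:", nx.average_shortest_path_length(giant))

    # Community detection: do individuals sort into distinct groups?
    comms = community.greedy_modularity_communities(G)
    print("detected community sizes:", [len(c) for c in comms])
    ```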

    “Among giraffes, adult females have enduring social relationships and form distinct and stable social communities with a relatively large number of other females, while, in their perpetual search for mating opportunities, adult males connect the adult female communities, forming super-communities,” said Monica Bond, a postdoctoral research associate at the University of Zurich and an author of the paper. “This type of complex society has evolutionary and conservation advantages, because the dynamics of the social system should allow gene flow between groups, which is an important part of maintaining a healthy and robust population.”

    The current research adds to a growing body of literature demonstrating that giraffes live in a socially structured society, despite the fact that herds have what researchers call “fission-fusion” dynamics, with the size and composition of the population constantly changing as animals move through the environment. Fission-fusion grouping dynamics are common among mammals, such as elephants, bats, some primates, and cetaceans, but, according to the researchers, this study is the first to demonstrate that giraffes reside in a complex society with dynamic herds embedded into stable communities within stable super-communities, all of which are driven by the variation in social connections among individuals.

    “The large scale of the study, in terms of the size of the landscape and the sheer number of animals, enabled us to uncover an upper apex level of social structure that was previously unknown,” said Lavista. “Using Microsoft’s AI tools allowed us to visualize and analyze a large volume of data to gain meaningful insights about giraffes over the 5 years of study.”

    The researchers believe the complex nature of giraffe populations could impact conservation efforts for these endangered giraffes, including translocation efforts that move individuals to new areas. They caution that translocating a small number of individuals to new areas should be limited, because such invasive actions destabilize the intricate web of social relationships among giraffes.

    In addition to Lee, Lavista, and Bond, the research team includes Md Nasir and Avleen Bijral at the Microsoft AI for Good Research Lab; Yu-Chia Chen at the University of Washington, Seattle; and Fred Bercovitch at Kyoto University in Japan.

    The research was supported by the Sacramento Zoo, the Columbus Zoo and Aquarium, the Tulsa Zoo, Tierpark Berlin and Zoo Berlin, Zoo Miami, the Cincinnati Zoo and Botanical Garden, and Save the Giraffes.

  • Massively Reducing Food Waste Could Feed the World
    28 September 2021
    It would also greatly cut greenhouse gas emissions
    -- Read more on ScientificAmerican.com
  • Bright lava flows, smoke pour from La Palma volcano eruption in new Landsat photos
    28 September 2021
    New satellite images of an active volcano on the Spanish island of La Palma capture vivid streams of lava pouring down the coastal mountain range and nearing the Atlantic Ocean.
  • The coevolution of particle physics and computing
    28 September 2021

    Over time, particle physics and astrophysics and computing have built upon one another’s successes. That coevolution continues today.

    In the mid-twentieth century, particle physicists were peering deeper into the history and makeup of the universe than ever before. Over time, their calculations became too complex to fit on a blackboard—or to farm out to armies of human “computers” doing calculations by hand. 

    To deal with this, they developed some of the world’s earliest electronic computers. 

    Physics has played an important role in the history of computing. The transistor—the switch that controls the flow of electrical signal within a computer—was invented by a group of physicists at Bell Labs. The incredible computational demands of particle physics and astrophysics experiments have consistently pushed the boundaries of what is possible. They have encouraged the development of new technologies to handle tasks from dealing with avalanches of data to simulating interactions on the scales of both the cosmos and the quantum realm. 

    But this influence doesn’t just go one way. Computing plays an essential role in particle physics and astrophysics as well. As computing has grown increasingly more sophisticated, its own progress has enabled new scientific discoveries and breakthroughs. 

    Managing an onslaught of data

    In 1973, scientists at Fermi National Accelerator Laboratory in Illinois got their first big mainframe computer: a 7-year-old hand-me-down from Lawrence Berkeley National Laboratory. Called the CDC 6600, it weighed about 6 tons. Over the next five years, Fermilab added five more large mainframe computers to its collection.

    Then came the completion of the Tevatron—at the time, the world’s highest-energy particle accelerator—which would provide the particle beams for numerous experiments at the lab. By the mid-1990s, two four-story particle detectors would begin selecting, storing and analyzing data from millions of particle collisions at the Tevatron per second. Called the Collider Detector at Fermilab and the DZero detector, these new experiments threatened to overpower the lab’s computational abilities.

    In December of 1983, a committee of physicists and computer scientists released a 103-page report highlighting the “urgent need for an upgrading of the laboratory’s computer facilities.” The report said the lab “should continue the process of catching up” in terms of computing ability, and that “this should remain the laboratory’s top computing priority for the next few years.”

    Instead of simply buying more large computers (which were incredibly expensive), the committee suggested a new approach: They recommended increasing computational power by distributing the burden over clusters or “farms” of hundreds of smaller computers. 

    Thanks to Intel’s 1971 development of a new commercially available microprocessor the size of a domino, computers were shrinking. Fermilab was one of the first national labs to try the concept of clustering these smaller computers together, treating each particle collision as a computationally independent event that could be analyzed on its own processor.  
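
    The idea is simple to sketch: because each collision event can be analyzed independently, events can be scattered across many processors and the results gathered afterward. The toy example below is a rough sketch of that pattern, using Python’s multiprocessing on one machine rather than a farm of separate computers, with a made-up analysis function.

    ```python
    from multiprocessing import Pool
    import random

    def analyze_event(event):
        # Stand-in for real reconstruction: sum the "energies" recorded for one
        # collision and flag events above an arbitrary threshold.
        total = sum(event["energies"])
        return {"id": event["id"], "total_energy": total, "interesting": total > 60}

    if __name__ == "__main__":
        random.seed(0)
        events = [{"id": i, "energies": [random.uniform(0, 10) for _ in range(12)]}
                  for i in range(10_000)]

        # Each event is independent, so events can simply be farmed out to workers.
        with Pool() as pool:
            results = pool.map(analyze_event, events, chunksize=100)

        print(sum(r["interesting"] for r in results), "events flagged")
    ```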

    Like many new ideas in science, it wasn’t accepted without some pushback. 

    Joel Butler, a physicist at Fermilab who was on the computing committee, recalls, “There was a big fight about whether this was a good idea or a bad idea.”

    A lot of people were enchanted with the big computers, he says. They were impressive-looking and reliable, and people knew how to use them. And then along came “this swarm of little tiny devices, packaged in breadbox-sized enclosures.” 

    The computers were unfamiliar, and the companies building them weren’t well-established. On top of that, it wasn’t clear how well the clustering strategy would work. 

    As for Butler? “I raised my hand [at a meeting] and said, ‘Good idea’—and suddenly my entire career shifted from building detectors and beamlines to doing computing,” he chuckles. 

    Not long afterward, innovation that sparked for the benefit of particle physics enabled another leap in computing. In 1989, Tim Berners-Lee, a computer scientist at CERN, launched the World Wide Web to help CERN physicists share data with research collaborators all over the world. 

    To be clear, Berners-Lee didn’t create the internet—that was already underway in the form of the ARPANET, developed by the US Department of Defense. But the ARPANET connected only a few hundred computers, and it was difficult to share information across machines with different operating systems.

    The web Berners-Lee created was an application that ran on the internet, like email, and started as a collection of documents connected by hyperlinks. To get around the problem of accessing files between different types of computers, he developed HTML (HyperText Markup Language), a markup language that formatted and displayed files in a web browser independent of the local computer’s operating system.

    Berners-Lee also developed the first web browser, allowing users to access files stored on the first web server (Berners-Lee’s computer at CERN). He implemented the concept of a URL (Uniform Resource Locator), specifying how and where to access desired web pages. 
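
    A URL packs the “how” and “where” into a single string: the scheme names the protocol, the network location names the server, and the path names the document. A quick illustration with Python’s standard library, using a made-up address rather than any particular historical URL:

    ```python
    from urllib.parse import urlparse

    url = "http://www.example.org/hypertext/WWW/TheProject.html"  # illustrative only
    parts = urlparse(url)

    print(parts.scheme)  # "http": how to access the resource (the protocol)
    print(parts.netloc)  # "www.example.org": where it lives (the web server)
    print(parts.path)    # "/hypertext/WWW/TheProject.html": which document to fetch
    ```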

    What started out as an internal project to help particle physicists share data within their institution fundamentally changed not just computing, but how most people experience the digital world today. 

    Back at Fermilab, cluster computing wound up working well for handling the Tevatron data. Eventually, it became industry standard for tech giants like Google and Amazon. 

    Over the next decade, other US national laboratories adopted the idea, too. SLAC National Accelerator Laboratory—then called Stanford Linear Accelerator Center—transitioned from big mainframes to clusters of smaller computers to prepare for its own extremely data-hungry experiment, BaBar. Both SLAC and Fermilab also were early adopters of Berners-Lee’s web server. The labs set up the first two websites in the United States, paving the way for this innovation to spread across the continent.

    In 1989, in recognition of the growing importance of computing in physics, Fermilab Director John Peoples elevated the computing department to a full-fledged division. The head of a division reports directly to the lab director, making it easier to get resources and set priorities. Physicist Tom Nash formed the new Computing Division, along with Butler and two other scientists, Irwin Gaines and Victoria White. Butler led the division from 1994 to 1998. 

    High-performance computing in particle physics and astrophysics

    These computational systems worked well for particle physicists for a long time, says Berkeley Lab astrophysicist Peter Nugent. That is, until Moore’s Law started grinding to a halt.  

    Moore’s Law is the idea that the number of transistors in a circuit will double, making computers faster and cheaper, every two years. The term was first coined in the mid-1970s, and the trend reliably proceeded for decades. But now, computer manufacturers are starting to hit the physical limit of how many tiny transistors they can cram onto a single microchip. 

    Because of this, says Nugent, particle physicists have been looking to take advantage of high-performance computing instead. 

    Nugent says high-performance computing is “something more than a cluster, or a cloud-computing environment that you could get from Google or AWS, or at your local university.” 

    What it typically means, he says, is that you have high-speed networking between computational nodes, allowing them to share information with each other very, very quickly. When you are computing on up to hundreds of thousands of nodes simultaneously, it massively speeds up the process. 

    On a single traditional computer, he says, 100 million CPU hours translates to more than 11,000 years of continuous calculations. But for scientists using a high-performance computing facility at Berkeley Lab, Argonne National Laboratory or Oak Ridge National Laboratory, 100 million hours is a typical, large allocation for one year.
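
    The conversion Nugent quotes is straightforward arithmetic, checked below.

    ```python
    cpu_hours = 100_000_000
    hours_per_year = 24 * 365
    print(cpu_hours / hours_per_year)  # about 11,400 years of continuous computing
    ```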

    Although astrophysicists have always relied on high-performance computing for simulating the birth of stars or modeling the evolution of the cosmos, Nugent says they are now using it for their data analysis as well. 

    This includes rapid image-processing computations that have enabled the observations of several supernovae, including SN 2011fe, captured just after it began. “We found it just a few hours after it exploded, all because we were able to run these pipelines so efficiently and quickly,” Nugent says. 

    According to Berkeley Lab physicist Paolo Calafiura, particle physicists also use high-performance computing for simulations—for modeling not the evolution of the cosmos, but rather what happens inside a particle detector. “Detector simulation is significantly the most computing-intensive problem that we have,” he says. 

    Scientists need to evaluate multiple possibilities for what can happen when particles collide. To properly correct for detector effects when analyzing particle detector experiments, they need to simulate more data than they collect. “If you collect 1 billion collision events a year,” Calafiura says, “you want to simulate 10 billion collision events.”

    Calafiura says that right now, he’s more worried about finding a way to store all of the simulated and actual detector data than he is about producing it, but he knows that won’t last. 

    “When does physics push computing?” he says. “When computing is not good enough… We see that in five years, computers will not be powerful enough for our problems, so we are pushing hard with some radically new ideas, and lots of detailed optimization work.”

    That’s why the Department of Energy’s Exascale Computing Project aims to build, in the next few years, computers capable of performing a quintillion (that is, a billion billion) operations per second. The new computers will be 1000 times faster than the current fastest computers. 

    The exascale computers will also be used for other applications ranging from precision medicine to climate modeling to national security.

    Machine learning and quantum computing

    Innovations in computer hardware have enabled astrophysicists to push the kinds of simulations and analyses they can do. For example, Nugent says, the introduction of graphics processing units has sped up astrophysicists’ ability to do calculations used in machine learning, leading to an explosive growth of machine learning in astrophysics. 

    With machine learning, which uses algorithms and statistics to identify patterns in data, astrophysicists can simulate entire universes in microseconds. 

    Machine learning has been important in particle physics as well, says Fermilab scientist Nhan Tran. “[Physicists] have very high-dimensional data, very complex data,” he says. “Machine learning is an optimal way to find interesting structures in that data.”

    The same way a computer can be trained to tell the difference between cats and dogs in pictures, it can learn how to identify particles from physics datasets, distinguishing between things like pions and photons. 
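
    The sketch below illustrates that idea on synthetic data (it is not a real physics analysis): a standard classifier is trained to separate two made-up particle classes from two hypothetical detector features.

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(3)
    n = 5000

    # Hypothetical detector features drawn from slightly different distributions
    # for the two classes (0 = "photon", 1 = "pion").
    label = rng.integers(0, 2, size=n)
    shower_width = rng.normal(1.0 + 0.6 * label, 0.4, size=n)
    had_fraction = rng.beta(2 + 6 * label, 8, size=n)
    X = np.column_stack([shower_width, had_fraction])

    X_tr, X_te, y_tr, y_te = train_test_split(X, label, test_size=0.25, random_state=0)
    clf = GradientBoostingClassifier().fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))
    ```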

    Tran says using computation this way can accelerate discovery. “As physicists, we’ve been able to learn a lot about particle physics and nature using non-machine-learning algorithms,” he says. “But machine learning can drastically accelerate and augment that process—and potentially provide deeper insight into the data.” 

    And while teams of researchers are busy building exascale computers, others are hard at work trying to build another type of supercomputer: the quantum computer.

    Remember Moore’s Law? Previously, engineers were able to make computer chips faster by shrinking the size of electrical circuits, reducing the amount of time it takes for electrical signals to travel. “Now our technology is so good that literally the distance between transistors is the size of an atom,” Tran says. “So we can’t keep scaling down the technology and expect the same gains we’ve seen in the past.”

    To get around this, some researchers are redefining how computation works at a fundamental level—like, really fundamental. 

    The basic unit of data in a classical computer is called a bit, which can hold one of two values: 1, if it has an electrical signal, or 0, if it has none. But in quantum computing, data is stored in quantum systems—things like electrons, which have either up or down spins, or photons, which are polarized either vertically or horizontally. These data units are called “qubits.”

    Here’s where it gets weird. Through a quantum property called superposition, qubits have more than just two possible states. An electron can be up, down, or in a variety of stages in between.

    What does this mean for computing? A collection of three classical bits can exist in only one of eight possible configurations: 000, 001, 010, 100, 011, 110, 101 or 111. But through superposition, three qubits can be in all eight of these configurations at once. A quantum computer can use that information to tackle problems that are impossible to solve with a classical computer.
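
    The counting above is easy to make concrete with a state vector: three classical bits pick out exactly one of the eight configurations, while a three-qubit quantum state carries an amplitude for all eight at once. A small numpy illustration:

    ```python
    import numpy as np

    # Classical: the bit string 011 is exactly one of the 8 possible configurations.
    classical = np.zeros(8)
    classical[0b011] = 1.0

    # Quantum: put each qubit into an equal superposition of 0 and 1 (the state a
    # Hadamard gate produces), then combine the three qubits into one state vector.
    plus = np.array([1.0, 1.0]) / np.sqrt(2)
    state = np.kron(np.kron(plus, plus), plus)

    print(state)               # 8 equal amplitudes of 1/(2*sqrt(2))
    print(np.abs(state) ** 2)  # each of the 8 outcomes has probability 1/8
    ```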

    Fermilab scientist Aaron Chou likens quantum problem-solving to throwing a pebble into a pond. The ripples move through the water in every possible direction, “simultaneously exploring all of the possible things that it might encounter.” 

    In contrast, a classical computer can only move in one direction at a time. 

    But this makes quantum computers faster than classical computers only when it comes to solving certain types of problems. “It’s not like you can take any classical algorithm and put it on a quantum computer and make it better,” says University of California, Santa Barbara physicist John Martinis, who helped build Google’s quantum computer. 

    Although quantum computers work in a fundamentally different way than classical computers, designing and building them wouldn’t be possible without traditional computing laying the foundation, Martinis says. “We're really piggybacking on a lot of the technology of the last 50 years or more.”

    The kinds of problems that are well suited to quantum computing are intrinsically quantum mechanical in nature, says Chou.

    For instance, Martinis says, consider quantum chemistry. Solving quantum chemistry problems with classical computers is so difficult, he says, that 10 to 15% of the world’s supercomputer usage is currently dedicated to the task. “Quantum chemistry problems are hard for the very reason why a quantum computer is powerful”—because to complete them, you have to consider all the different quantum-mechanical states of all the individual atoms involved. 

    Because making better quantum computers would be so useful in physics research, and because building them requires skills and knowledge that physicists possess, physicists are ramping up their quantum efforts. In the United States, the National Quantum Initiative Act of 2018 called for the National Institute of Standards and Technology, the National Science Foundation and the Department of Energy to support programs, centers and consortia devoted to quantum information science. 

    Coevolution requires cooperation 

    In the early days of computational physics, the line between who was a particle physicist and who was a computer scientist could be fuzzy. Physicists used commercially available microprocessors to build custom computers for experiments. They also wrote much of their own software—ranging from printer drivers to the software that coordinated the analysis between the clustered computers. 

    Nowadays, roles have somewhat shifted. Most physicists use commercially available devices and software, allowing them to focus more on the physics, Butler says. But some people, like Anshu Dubey, work right at the intersection of the two fields. Dubey is a computational scientist at Argonne National Laboratory who works with computational physicists. 

    When a physicist needs to computationally interpret or model a phenomenon, sometimes they will sign up a student or postdoc in their research group for a programming course or two and then ask them to write the code to do the job. Although these codes are mathematically complex, Dubey says, they aren’t logically complex, making them relatively easy to write.  

    A simulation of a single physical phenomenon can be neatly packaged within fairly straightforward code. “But the real world doesn’t want to cooperate with you in terms of its modularity and encapsularity,” she says. 

    Multiple forces are always at play, so to accurately model real-world complexity, you have to use more complex software—ideally software that doesn’t become impossible to maintain as it gets updated over time. “All of a sudden,” says Dubey, “you start to require people who are creative in their own right—in terms of being able to architect software.” 

    That’s where people like Dubey come in. At Argonne, Dubey develops software that researchers use to model complex multi-physics systems—incorporating processes like fluid dynamics, radiation transfer and nuclear burning.

    Hiring computer scientists for research projects in physics and other fields of science can be a challenge, Dubey says. Most funding agencies specify that research money can be used for hiring students and postdocs, but not paying for software development or hiring dedicated engineers. “There is no viable career path in academia for people whose careers are like mine,” she says.

    In an ideal world, universities would establish endowed positions for a team of research software engineers in physics departments with a nontrivial amount of computational research, Dubey says. These engineers would write reliable, well-architected code, and their institutional knowledge would stay with a team. 

    Physics and computing have been closely intertwined for decades. However the two develop—toward new analyses using artificial intelligence, for example, or toward the creation of better and better quantum computers—it seems they will remain on this path together.

  • Unbreakable glass inspired by seashells
    28 September 2021
    Scientists from McGill University develop stronger and tougher glass, inspired by the inner layer of mollusk shells. Instead of shattering upon impact, the new material has the resiliency of plastic and could be used to improve cell phone screens in the future, among other applications.

Science blogs

Science Blogs

28 September 2021

Science Blogs
  • Happiness in Early Adulthood May Protect Against Dementia
    28 September 2021

    While research has shown that poor cardiovascular health can damage blood flow to the brain, increasing the risk for dementia, a new study led by UC San Francisco indicates that poor mental health may also take its toll on cognition.

    The research adds to a body of evidence that links depression with dementia, but while most studies have pointed to its association in later life, the UCSF study shows that depression in early adulthood may lead to lower cognition 10 years later and to cognitive decline in old age.

    The study publishes in the Journal of Alzheimer’s Disease on Sept. 28, 2021.

    The researchers used innovative statistical methods to predict average trajectories of depressive symptoms for approximately 15,000 participants ages 20 to 89, divided into three life stages: older, midlife and young adulthood. They then applied these predicted trajectories and found that in a group of approximately 6,000 older participants, the odds of cognitive impairment were 73 percent higher for those estimated to have elevated depressive symptoms in early adulthood, and 43 percent higher for those estimated to have elevated depressive symptoms in later life.

    These results were adjusted for depressive symptoms in other life stages and for differences in age, sex, race, educational attainment, body mass index, history of diabetes and smoking status. For depressive symptoms in midlife, the researchers found an association with cognitive impairment, but this was discounted when they adjusted for depression in other life stages.
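
    The percentages above come from adjusted odds ratios: an odds ratio of 1.73 for a risk factor corresponds to 73 percent higher odds of the outcome. The sketch below shows, on synthetic data, how such a figure is typically read off a logistic regression with covariate adjustment; the variable names and effect sizes are invented for illustration and are not taken from the study.

    ```python
    import numpy as np
    import statsmodels.api as sm

    # Synthetic cohort: a binary indicator for elevated early-adulthood depressive
    # symptoms, plus age as a covariate to adjust for.
    rng = np.random.default_rng(42)
    n = 6000
    early_dep = rng.integers(0, 2, n)
    age = rng.normal(72, 5, n)

    # Simulate cognitive impairment with a true log-odds effect of ~0.55 for early_dep.
    log_odds = -3.0 + 0.55 * early_dep + 0.03 * (age - 72)
    impaired = (rng.random(n) < 1 / (1 + np.exp(-log_odds))).astype(int)

    X = sm.add_constant(np.column_stack([early_dep, age]))
    fit = sm.Logit(impaired, X).fit(disp=0)

    odds_ratio = np.exp(fit.params[1])   # exp(coefficient) gives the adjusted odds ratio
    print(f"adjusted odds ratio: {odds_ratio:.2f}")
    print(f"about {100 * (odds_ratio - 1):.0f}% higher odds of impairment")
    ```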

    Excess Stress Hormones May Damage Ability to Make New Memories

    “Several mechanisms explain how depression might increase dementia risk,” said first author Willa Brenowitz, PhD, MPH, of the UCSF Department of Psychiatry and Behavioral Sciences and the UCSF Weill Institute for Neurosciences. “Among them is that hyperactivity of the central stress response system increases production of the stress hormones glucocorticoids, leading to damage of the hippocampus, the part of the brain essential for forming, organizing and storing new memories.”

    Other studies have linked depression with atrophy of the hippocampus, and one study has shown faster rates of volume loss in women, she said.

    In estimating the depressive symptoms across each life stage, researchers pooled data from younger participants with data from the approximately 6,000 older participants and predicted average trajectories. These participants, whose average age was 72 at the start of the study and who lived at home, had been enrolled by the Health, Aging and Body Composition Study and the Cardiovascular Health Study. They were followed annually or semi-annually for up to 11 years.

    U-Shaped Curve Adds Credence to Predicted Trajectories

    Because no longitudinal studies have followed the same participants across the entire life course, the authors relied on imputed values. “Imputed depressive symptom trajectories fit a U-shaped curve, similar to age-related trends in other research,” they noted.

    Participants were screened for depression using a tool called the CESD-10, a 10-item questionnaire assessing symptoms in the past week. Moderate or high depressive symptoms were found in 13 percent of young adults, 26 percent of midlife adults and 34 percent of older participants.

    Some 1,277 participants were diagnosed with cognitive impairment following neuropsychological testing, evidence of global decline, documented use of a dementia medication or hospitalization with dementia as a primary or secondary diagnosis.

    “Generally, we found that the greater the depressive symptoms, the lower the cognition and the faster the rates of decline,” said Brenowitz, who is also affiliated with the UCSF Department of Epidemiology and Biostatistics. “Older adults estimated to have moderate or high depressive symptoms in early adulthood were found to experience a drop in cognition over 10 years.”

    With up to 20 percent of the population suffering from depression during their lifetime, it’s important to recognize its role in cognitive aging, said senior author Kristine Yaffe, MD, of the UCSF departments of Psychiatry and Behavioral Sciences, and Epidemiology and Biostatistics. “Future work will be needed to confirm these findings, but in the meantime, we should screen and treat depression for many reasons.”

    Co-Authors: Eric Vittinghoff, PhD, from UCSF; Adina Zeki Al Hazzouri, PhD, from Columbia University; Sherita H. Golden, MD, from Johns Hopkins University School of Medicine; and Annette L. Fitzpatrick, PhD, from University of Washington.

    Funding: National Institutes of Health and National Institute on Aging (1RF1AG054443).

  • Humans may have hatched and raised deadly cassowary chicks
    28 September 2021

    As early as 18,000 years ago, humans in New Guinea may have collected cassowary eggs near maturity and then raised the birds to adulthood, according to an international team of scientists, who used eggshells to determine the developmental stage of the ancient embryos/chicks when the eggs cracked.

    “This behavior that we are seeing is coming thousands of years before domestication of the chicken,” said Kristina Douglass, assistant professor of anthropology and African studies, Penn State. “And this is not some small fowl, it is a huge, ornery, flightless bird that can eviscerate you. Most likely the dwarf variety that weighs 20 kilos (44 pounds).”

    The researchers report today (Sept. 27) in the Proceedings of the National Academy of Sciences that “the data presented here may represent the earliest indication of human management of the breeding of an avian taxon anywhere in the world, preceding the early domestication of chicken and geese by several millennia.”

    Cassowaries are not chickens; in fact, they bear more resemblance to velociraptors than most domesticated birds. “However, cassowary chicks imprint readily to humans and are easy to maintain and raise up to adult size,” the researchers report. Imprinting occurs when a newly hatched bird decides that the first thing it sees is its mother. If that first glance happens to catch sight of a human, the bird will follow the human anywhere.

    According to the researchers, cassowary chicks are still traded as a commodity in New Guinea.

    Importance of eggshells

    Eggshells are part of the assemblage of many archaeological sites, but according to Douglass, archaeologists do not often study them. The researchers developed a new method to determine how old a chick embryo was when an egg was harvested. They reported this work in a recent issue of the Journal of Archaeological Science.

    “I’ve worked on eggshells from archaeological sites for many years,” said Douglass. “I discovered research on turkey eggshells that showed changes in the eggshells over the course of development that were an indication of age. I decided this would be a useful approach.”

    The age assignment of the embryos/chicks depends on the 3-dimensional features of the inside of the shell. To develop the method needed to determine the eggs’ developmental age when the shells broke, the researchers used ostrich eggs from a study done to improve ostrich reproduction.  Researchers at the Oudtshoorn Research Farm, part of the Western Cape Government of South Africa, harvested three eggs every day of incubation for 42 days for their study and supplied Douglass and her team with samples from 126 ostrich eggs.

    They took four samples from each of these eggs for a total of 504 shell samples, each having a specific age. They created high-resolution, 3D images of the shell samples. By inspecting the inside of these eggs, the researchers created a statistical assessment of what the eggs looked like during stages of incubation. The researchers then tested their model with modern ostrich and emu eggs of known age.

    The insides of the eggshells change through development because the developing chicks get calcium from the eggshell. Pits begin to appear in the middle of development.

    “It is time dependent, but a little more complicated,” said Douglass. “We used a combination of 3D imaging, modeling and morphological descriptions.”
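
    As a rough illustration of this kind of age model, the sketch below fits a regression that predicts incubation day from a couple of numeric shell-interior descriptors and then applies it to new fragments. The feature names, values and model choice are invented for illustration; the actual study worked from high-resolution 3D images with its own statistical approach.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    # Hypothetical training set: 504 shell samples of known incubation day (1-42),
    # each reduced to simple numeric descriptors of the shell interior.
    rng = np.random.default_rng(1)
    days = rng.integers(1, 43, size=504)
    pit_density = 0.5 * days + rng.normal(0, 3, 504)   # pitting grows as the chick draws calcium
    pit_depth = 0.02 * days + rng.normal(0, 0.2, 504)
    X = np.column_stack([pit_density, pit_depth])

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    print(cross_val_score(model, X, days, cv=5).mean())   # how well known ages are recovered
    model.fit(X, days)

    # Applied to archaeological fragments, consistently late predicted ages would point
    # to eggs collected near hatching rather than eaten early.
    fragments = np.array([[18.0, 0.7], [2.5, 0.1], [20.5, 0.8]])
    print(model.predict(fragments))
    ```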

    The researchers then turned to legacy shell collections from two sites in New Guinea — Yuku and Kiowa. They applied their approach to more than 1,000 fragments of these 18,000- to 6,000-year-old eggs.

    “What we found was that a large majority of the eggshells were harvested during late stages,” said Douglass. “The eggshells look very late; the pattern is not random. They were either into eating baluts or they are hatching chicks.”

    A balut is a nearly developed embryo chick usually boiled and eaten as street food in parts of Asia.

    The original archaeologists found no indication of penning for the cassowaries. The few cassowary bones found at sites are only those of the meaty portions — leg and thigh — suggesting these were hunted birds that were processed in the wild, with only the meatiest parts hauled home.

    “We also looked at burning on the eggshells,” said Douglass. “There are enough samples of late stage eggshells that do not show burning that we can say they were hatching and not eating them.”

    To successfully hatch and raise cassowary chicks, the people would need to know where the nests were, know when the eggs were laid and remove them from the nest just before hatching. Back in the late Pleistocene, according to Douglass, humans were purposefully collecting these eggs and this study suggests people were not just harvesting eggs to eat the contents.

    Also working on this project from Penn State were Priyangi Bulathsinhala, assistant teaching professor of statistics; Tim Tighe, assistant research professor, Materials Research Institute; and Andrew L. Mack, grants and contract coordinator, Penn State Altoona.

    Others working on the project include Dylan Gaffney, graduate student, University of Cambridge, U.K.; Theresa J. Feo, senior science officer, California Council of Science and Technology; and Megan Spitzer, research assistant; Scott Whittaker, manager, scientific imaging; Helen James, research zoologist and curator of birds; and Torben Rick, curator of North American Archaeology, all at the National Museum of Natural History, Smithsonian Institution. Glenn R. Summerhayes, professor of archaeology, University of Otago, New Zealand; and Zanell Brand, production scientist, Oudtshoorn Research Farm, Elsenburg, Department of Agriculture, Western Cape Government, South Africa, also worked on the project.

    The Smithsonian National Museum of Natural History, the National Science Foundation and Penn State’s College of the Liberal Arts supported this work.

  • A new approach to the data-deletion conundrum
    28 September 2021

    Rising consumer concern over data privacy has led to a rush of “right to be forgotten” laws around the world that allow individuals to request their personal data be expunged from massive databases that catalog our increasingly online lives. Researchers in artificial intelligence have observed that user data does not exist only in its raw form in a database; it is also implicitly contained in models trained on that data. So far, they have struggled to find methods for deleting these “traces” of users efficiently. The more complex the model is, the more challenging it becomes to delete data.

    “The exact deletion of data — the ideal — is hard to do in real time,” says James Zou, a professor of biomedical data science at Stanford University and an expert in artificial intelligence. “In training our machine learning models, bits and pieces of data can get embedded in the model in complicated ways. That makes it hard for us to guarantee a user has truly been forgotten without altering our models substantially.”

    Zou is senior author of a paper recently presented at the International Conference on Artificial Intelligence and Statistics (AISTATS) that may provide an answer to the data deletion problem, one that works for privacy-concerned individuals and artificial intelligence experts alike. They call it approximate deletion.

    Read the study: Approximate Data Deletion from Machine Learning Models

    “Approximate deletion, as the name suggests, allows us to remove most of the users’ implicit data from the model. They are ‘forgotten,’ but in such a way that we can do the retraining of our models at a later, more opportune time,” says Zach Izzo, a graduate student in mathematics and the first author of the AISTATS paper.

    Approximate deletion is especially useful in quickly removing sensitive information or features unique to a given individual that could potentially be used for identification after the fact, while postponing the computationally intensive full model retraining to times of lower computational demand. Under certain assumptions, Zou says, approximate deletion even achieves the holy grail of exact deletion of a user’s implicit data from the trained model.

    Driven by Data

    Machine learning works by combing databases and applying various predictive weights to features in the data — an online shopper’s age, location, and previous purchase history, for instance, or a streamer’s past viewing history and personal ratings of movies  watched. The models are not confined to commercial applications and are now widely used in radiology, pathology, and other fields of direct human impact.

    In theory, information in a database is anonymized, but users concerned about privacy fear that they can still be identified by the bits and pieces of information about them that are still wedged in the models, begetting the need for right to be forgotten laws.

    The gold standard in the field, Izzo says, is to find the exact same model as if the machine learning had never seen the deleted data points in the first place. That standard, known as “exact deletion,” is hard if not impossible to achieve, especially with large, complicated models like those that recommend products or movies to online shoppers and streamers. Exact data deletion effectively means retraining a model from scratch, Izzo says.

    “Doing that requires taking the algorithm offline for retraining. And that costs real money and real time,” he says.
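
    The gap between simple and complex models is easy to see in a toy case. For ordinary least squares, a single training record can be removed from the fitted model exactly, in closed form, using a rank-one (Sherman-Morrison) update of the inverse Gram matrix, with no retraining at all. The sketch below is a generic illustration of that point, not the approximate-deletion method from the AISTATS paper.

    ```python
    import numpy as np

    def fit_ols(X, y):
        """Ordinary least squares: theta = (X^T X)^{-1} X^T y."""
        A_inv = np.linalg.inv(X.T @ X)
        return A_inv @ (X.T @ y), A_inv

    def delete_row(A_inv, b, x, y_i):
        """Exactly remove one training row (x, y_i) via a Sherman-Morrison downdate."""
        Ax = A_inv @ x
        A_inv_new = A_inv + np.outer(Ax, Ax) / (1.0 - x @ Ax)
        return A_inv_new @ (b - y_i * x)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(scale=0.1, size=200)

    theta, A_inv = fit_ols(X, y)
    theta_deleted = delete_row(A_inv, X.T @ y, X[17], y[17])

    # Sanity check: retraining from scratch without row 17 gives the same parameters.
    keep = np.ones(len(y), dtype=bool)
    keep[17] = False
    theta_retrained, _ = fit_ols(X[keep], y[keep])
    print(np.allclose(theta_deleted, theta_retrained))   # True
    ```

    Deep networks have no analogue of this closed-form update, which is what motivates approximate schemes like the one described here.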

    What is Approximate Deletion?

    In solving the deletion conundrum, Zou and Izzo have come at things slightly differently than their counterparts in the field. In effect, they create synthetic data to replace — or, more accurately, negate — that of the individual who wishes to be forgotten.

    This temporary solution satisfies the privacy-minded individual’s immediate desire to not be identified from data in the model — that is, to be forgotten — while reassuring the computer scientists, and the businesses that rely upon them, that their models will work as planned, at least until a more opportune time when the model can be retrained at lower cost.

    There is a philosophical aspect to the challenge, the authors say. Where privacy, law, and commerce intersect, the discussion begins with a meaningful definition of what it means to “delete” information. Does deletion mean the actual destruction of data? Or is it enough to ensure that no one could ever identify an anonymous person from it? In the end, Izzo says, answering that key question requires balancing the privacy rights of consumers and the needs of science and commerce.

    “That’s a pretty difficult, non-trivial question,” Izzo says. “For many of the more complicated models used in practice, even if you delete zero people from a database, retraining alone can result in a completely different model. So even defining the proper target for the retrained model is challenging.”

    With their approximate deletion approach in hand, the authors then validated the effectiveness of their method empirically, confirming their theoretical approach on the path to practical application. That critical step now becomes the goal of future work.

    “We think approximate deletion is an important initial step toward solving what has been a difficult challenge for AI,” Zou says.

    Stanford HAI’s mission is to advance AI research, education, policy and practice to improve the human condition.

  • Antimicrobial Coating for Orthopedic Implants Prevents Dangerous Infections
    28 September 2021

    Biomedical engineers and surgeons at Duke University and UCLA have demonstrated an antibiotic coating that can be applied to orthopedic implants minutes before surgery and that eliminates the chances of an infection around the implant.

    In early trials in mice, the coating prevented all subsequent infections, even without infusions of antibiotics into the bloodstream, which is the current standard of care. After 20 days, the coating did not reduce the bone’s ability to fuse with the implant and was completely absorbed by the body.

    The results appear online September 16 in the journal Nature Communications.

    The project began when Tatiana Segura, professor of biomedical engineering at Duke, met Nicholas Bernthal, interim chair and executive medical director at the David Geffen School of Medicine at UCLA, who specializes in pediatric orthopedic oncology and surgery. He told Segura that many children being treated for bone cancer have large portions of bone removed, which then requires orthopedic implants. But because the patients are usually also undergoing chemotherapy, their immune systems are weak and they are especially vulnerable to bacteria colonizing the surface of the implant.

    “These kids face the choice of having chemotherapy versus saving their limb or even sometimes needing amputations to survive, which sounds horrific to me,” Segura said. “All they really need is something to rub on the implant to stop an infection from taking hold, because preventing an infection is much easier than treating one. So we came up with this coating technology that we hope will provide a solution.”

    Implant infections aren’t unique to children or to cancer patients, however. For joint replacement surgeries, for example, infection occurs in 1% of primary and up to 7% of revision surgeries, which requires repeated revision surgeries and prolonged intravenous antibiotics. Treatment doesn’t always work, however, as these patients have a higher five-year mortality risk than those diagnosed with HIV/AIDS or breast cancer. Implant infections are estimated to cost the health care system more than $8.6 billion annually in the U.S. alone.

    Part of the challenge of treating these infections is that bacteria colonize the surface of the implants themselves. This means that there are no blood vessels flowing through the bacterial colonies to deliver the antibiotics coursing through a patient’s veins. The only recourse is often the removal of the original implant, which is usually the best of what are only bad options.

    Some doctors have taken to their own solutions, such as using antibiotic powder when closing the surgical wound or infusing antibiotics into the bone cement used to hold the implant in place. Neither of these tactics has been proven to be clinically effective. There is also the option of implant manufacturers adding antibiotic properties to their devices. But this would greatly reduce the product’s shelf life and also require a long and complicated process of FDA approval, since the implants would then be in a new classification.

    Segura’s new antibiotic coating sidesteps all of these challenges.

    “We’ve shown that a point-of-care, antibiotic-releasing coating protects implants from bacterial challenge, and can be quickly and safely applied in the operating room without the need to modify existing implants,” said Christopher Hart, a resident physician in UCLA Orthopaedic Surgery who helped conduct the experiments.

    The new antimicrobial coating is made of two polymers, one that repels water and one that mixes well with water. Both are combined in a solution with an antibiotic of the physician’s choosing and then applied directly to the orthopedic implant by dipping, painting or spraying. When exposed to a bright ultraviolet light, the two polymers couple together and self-assemble into a grid-like structure that traps the antibiotics.

    The reaction is an example of “click chemistry,” which is a general way of describing reactions that happen quickly at room temperature, produce only a single reaction product, have an extremely high yield and occur within a single container.

    “This study is a great example of the power of click chemistry in biomedical applications,” said Weixian Xi, now a senior scientist at Illumina who was a postdoctoral researcher at UCLA during the study. “This ‘smart’ and ‘clickable’ polymeric coating enables protections of implants from bacterial infection and makes a personalized approach possible.”

    “Our coating can be personalizable because it can use almost any antibiotic,” Segura continued. “The antibiotic can be chosen by the physician based on where in the body the device is being implanted and what pathogens are common in whatever part of the world the surgery is taking place.”

    The click chemistry polymer grid also has an affinity for metal. Tests involving various types of implants showed that the coating was very difficult to rub off during surgical procedures. Once inside the body, however, the conditions cause the polymer to degrade, slowly releasing the antibiotics over the course of two to three weeks.

    In the study, researchers rigorously tested the coating in mice with either leg or spine implants. After 20 days, the coating did not inhibit the bone’s growth into the implant and prevented 100% of infections. This time period, the researchers say, is long enough to prevent the vast majority of these types of infections from occurring.

    The researchers have not yet tested their coating on larger animals. Since larger animals—such as humans—have larger bones and need larger implants, there is much more surface area to protect against bacterial infections. But the researchers are confident that their invention is up to the task and plan to pursue the steps needed to commercialize the product.

    “We believe this transdisciplinary work represents the future of surgical implants, providing a point of application coating that transforms the implant from a hotspot for infection into a ‘smart’ antimicrobial therapeutic,” Bernthal said. “You only need to treat a single patient with an infected implant to realize how transformational this could be for patient care —saving both life and limbs for many.”

    This research was supported by the National Institute of Arthritis and Musculoskeletal and Skin Diseases of the National Institutes of Health (5K08AR069112-01, T32AR059033).

    “Point-Of-Care Antimicrobial Coating Protects Orthopaedic Implants From Bacterial Challenge.” Weixian Xi, Vishal Hegde, Stephen D. Zoller, Howard Y. Park, Christopher M. Hart, Takeru Kondo, Christopher D. Hamad, Yan Hu, Amanda H. Loftin, Daniel O. Johansen, Zachary Burke, Samuel Clarkson, Chad Ishmael, Kellyn Hori, Zeinab Mamouei, Hiroko Okawa, Ichiro Nishimura, Nicholas M. Bernthal and Tatiana Segura. Nature Communications, Sept. 17, 2021. DOI: 10.1038/s41467-021-25383-z

  • Male giraffes are more socially connected than females
    28 September 2021

    Although female giraffes have closer “friends” than male giraffes, male giraffes have more “acquaintances” than females, according to a new study by an international team that includes a Penn State biologist. The study demonstrates that giraffes form a complex multilevel society that is driven by differences in the social connections among individuals, which could have conservation implications for the endangered giraffes.

    “The degree to which an animal is connected to others in its social network influences reproductive success and population ecology, spread of information, and even how diseases move through a population,” said Derek Lee, associate research professor at Penn State and an author of the paper. “Information about sociality therefore can provide important guidance for conservation.”

    The research team examined social connectedness and social movements of endangered Masai giraffes in the Tarangire Ecosystem of northern Tanzania using data collected over 5 years. The work, led by Juan Lavista Ferres of the Microsoft AI for Good Research Lab, involved constructing the social network of more than 1,000 free-ranging giraffes. The team presents their results in a paper appearing Sept. 27 in the journal Animal Behaviour.

    “We found that male giraffes overall had higher social connectedness than females, which means males interact with greater numbers of other individuals than females,” said Lee. “Older males had the shortest social path length to all the other giraffes in the network. This might reflect the mating strategy of males, who roam widely across the landscape searching for females to mate with and make connections in the process. Young males had the most social ties and moved most often among groups, reflecting social exploration as they prepare to disperse away from their mothers.”

    According to the study, adult female giraffes tend to have fewer but stronger relationships with each other than males and younger females, a trend that has also been observed in giraffe populations elsewhere in Africa. The researchers previously found that relationships among female giraffes allow them to live longer.

    The results reveal an additional layer of complexity to giraffe societies beyond what was seen in earlier research. Previous research showed that adult females in this population have formed about a dozen distinct groups, or communities, of 60 to 90 individuals that tend to associate more with each other than with members of the other groups, even when the groups use the same spaces. The current study builds on this knowledge and found that the full population, including calves and adult males, has a more complex structure: The female communities are embedded within three socially distinct larger groups called ‘super-communities’ of between 800 and 900 individuals, and one ‘oddball’ super-community of 155 individuals in a small, isolated area.
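
    The network quantities mentioned here, connectedness, path length and community structure, are standard graph metrics. Below is a minimal sketch on a made-up association network in which nodes are giraffes and an edge links two individuals repeatedly seen in the same herd; it is not the study’s actual data or analysis pipeline.

    ```python
    import networkx as nx
    from networkx.algorithms import community

    # Toy association network: two tight female clusters bridged by roaming males.
    G = nx.Graph()
    G.add_edges_from([
        ("F1", "F2"), ("F2", "F3"), ("F1", "F3"),                              # female cluster 1
        ("F4", "F5"), ("F5", "F6"), ("F4", "F6"),                              # female cluster 2
        ("M1", "F1"), ("M1", "F2"), ("M1", "F4"), ("M1", "F5"), ("M1", "F6"),  # widely roaming male
        ("M2", "F2"), ("M2", "F5"),
    ])

    degree = nx.degree_centrality(G)                      # "connectedness"
    mean_path = {n: sum(nx.shortest_path_length(G, n).values()) / (len(G) - 1) for n in G}
    groups = community.greedy_modularity_communities(G)   # stable sub-communities

    print(max(degree, key=degree.get))         # most connected individual (here the male M1)
    print(min(mean_path, key=mean_path.get))   # shortest average social path to everyone else
    print([sorted(g) for g in groups])
    ```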

    “Among giraffes, adult females have enduring social relationships and form distinct and stable social communities with a relatively large number of other females, while, in their perpetual search for mating opportunities, adult males connect the adult female communities, forming super-communities,” said Monica Bond, a postdoctoral research associate at the University of Zurich and an author of the paper. “This type of complex society has evolutionary and conservation advantages, because the dynamics of the social system should allow gene flow between groups, which is an important part of maintaining a healthy and robust population.”

    The current research adds to a growing body of literature demonstrating that giraffes live in a socially structured society, despite the fact that herds have what researchers call “fission-fusion” dynamics, with the size and composition of groups constantly changing as animals move through the environment. Fission-fusion grouping dynamics are common among mammals, such as elephants, bats, some primates, and cetaceans, but, according to the researchers, this study is the first to demonstrate that giraffes reside in a complex society with dynamic herds embedded into stable communities within stable super-communities, all of which are driven by the variation in social connections among individuals.

    “The large scale of the study, in terms of the size of the landscape and the sheer number of animals, enabled us to uncover an upper apex level of social structure that was previously unknown,” said Lavista. “Using Microsoft’s AI tools allowed us to visualize and analyze a large volume of data to gain meaningful insights about giraffes over the 5 years of study.”

    The researchers believe the complex nature of giraffe populations could impact conservation efforts for these endangered giraffes, including translocation efforts that move individuals to new areas. They caution that translocating a small number of individuals to new areas should be limited, because such invasive actions destabilize the intricate web of social relationships among giraffes.

    In addition to Lee, Lavista, and Bond, the research team includes Md Nasir and Avleen Bijral at the Microsoft AI for Good Research Lab; Yu-Chia Chen at the University of Washington, Seattle; and Fred Bercovitch at Kyoto University in Japan.

    The research was supported by the Sacramento Zoo, the Columbus Zoo and Aquarium, the Tulsa Zoo, Tierpark Berlin and Zoo Berlin, Zoo Miami, the Cincinnati Zoo and Botanical Garden, and Save the Giraffes.

  • Massively Reducing Food Waste Could Feed the World
    28 September 2021
    It would also greatly cut greenhouse gas emissions
    -- Read more on ScientificAmerican.com
  • Bright lava flows, smoke pour from La Palma volcano eruption in new Landsat photos
    28 September 2021
    New satellite images of an active volcano on the Spanish island of La Palma capture vivid streams of lava pouring down the coastal mountain range and nearing the Atlantic Ocean.

Science News

Science News Websites

28 September 2021

Science News Websites
  • Hubble Measures Horizontal Winds in Jupiter’s Great Red Spot
    28 September 2021
    Using data from the WFC3/UVIS instrument on board the NASA/ESA Hubble Space Telescope, a team of astronomers has measured the horizontal winds in Jupiter’s most distinctive feature, the Great Red Spot. By analyzing the long-term data from the boundaries of the giant storm, known as the high-speed ring, they’ve found that the wind speed has [...]
  • Massively Reducing Food Waste Could Feed the World
    28 September 2021
    It would also greatly cut greenhouse gas emissions
    -- Read more on ScientificAmerican.com
  • Tweaking alloy microchemistry for flawless metal 3D printing
    28 September 2021
    In the last few decades, metal 3D printing has spearheaded the efforts in creating custom parts of intricate shapes and high functionality. But as additive manufacturers have included more alloys for their 3D printing needs, so have the challenges in creating uniform, defect-free parts.
  • Deadly auto crashes more likely during pandemic lockdown
    28 September 2021
    With fewer people on the road during the early days of the pandemic, more drivers were speeding and driving recklessly, resulting in more crashes being deadly, a new study found.
  • Social distancing measures in the spring of 2020 effectively curbed the COVID-19 pandemic in Germany, study finds
    28 September 2021
    Early contact restrictions and school closures prevented over 80 per cent of COVID-19 infections and over 60 per cent of deaths in Germany within three weeks, a new study finds.
  • Theory of Mind: Children Do Not Understand Concept of Others Having False Beliefs Until Age 6 or 7
    28 September 2021

    New developmental psychology work has upended decades of research suggesting that children as young as 4 years old possess theory of mind. Having theory of...

    The post Theory of Mind: Children Do Not Understand Concept of Others Having False Beliefs Until Age 6 or 7 appeared first on SciTechDaily.

  • Inside the Nail-biting Quest to Find the 'Loneliest Whale'
    28 September 2021
    It’s a tale of sound; the song of a solitary whale that vocalizes at a unique frequency, 52 Hertz, that no other whale—as the story goes—can seemingly understand.  It’s...
    -- Read more on ScientificAmerican.com
  • Risk of airborne transmission of avian influenza from wild waterfowl to poultry negligible
    28 September 2021
    Research by Wageningen Bioveterinary Research (WBVR) has shown that the risk of airborne transmission of high pathogenic avian influenza virus from infected wild birds is negligible. The research looked specifically at the airborne movement of particles from wild waterfowl droppings in the vicinity of poultry farms during the risk season for avian influenza (October to March). It also considered transmission via aerosolization, with the exhalations or coughs of wild waterfowl infected with avian influenza virus finding their way into the ventilation systems of poultry farms. As a precaution, it's important that the carcasses of wild waterfowl or other wild birds that have died of high pathogenic avian influenza are removed from their habitat as soon as possible. If not, scavengers eating the carcasses could cause feathers to become distributed. Feathers of wild birds that died of high pathogenic avian influenza can contain the virus, and the virus can survive for a long time in those feathers.
  • Optical chip protects quantum technology from errors
    28 September 2021
    In today's digital infrastructure, the data bits we use to send and process information can be either 0 or 1. Being able to correct possible errors that may occur in computations using these bits is a vital part of information processing and communication systems. But a quantum computer uses quantum bits, which can be a kind of mixture of 0 and 1, known as quantum superposition. This mixture is vital to their power—but it makes error correction far more complicated.
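
    As a bare-bones illustration of the “mixture of 0 and 1” described above (and not related to the photonic chip in the study itself), a single qubit can be written as a normalized two-component amplitude vector, with measurement probabilities given by the squared magnitudes:

    ```python
    import numpy as np

    ket0 = np.array([1.0, 0.0])        # |0>
    ket1 = np.array([0.0, 1.0])        # |1>

    psi = (ket0 + ket1) / np.sqrt(2)   # equal superposition of 0 and 1

    probs = np.abs(psi) ** 2           # Born rule: measurement probabilities
    print(probs)                       # [0.5 0.5]

    # Each measurement collapses the state to a definite classical bit, which is one
    # reason the copy-and-compare tricks of classical error correction do not carry over.
    rng = np.random.default_rng(0)
    print(rng.choice([0, 1], size=10, p=probs))
    ```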
  • Unbreakable glass inspired by seashells
    28 September 2021
    Scientists from McGill University develop stronger and tougher glass, inspired by the inner layer of mollusk shells. Instead of shattering upon impact, the new material has the resiliency of plastic and could be used to improve cell phone screens in the future, among other applications.

Health Science blogs

Science Blogs

28 September 2021

Science Blogs
  • Ancient disaster destroyed Biblical city Tall el-Hammam
    28 September 2021

    In the Middle Bronze Age (about 3600 years ago or roughly 1650 BCE), the city of Tall el-Hammam was ascendant. Located on high ground in the southern Jordan Valley, northeast of the Dead Sea, the settlement in its time had become the largest continuously occupied Bronze Age city in the southern Levant, having hosted early civilization for a few thousand years. At that time, it was 10 times larger than Jerusalem and 5 times larger than Jericho.

    “It’s an incredibly culturally important area,” said James Kennett, emeritus professor of earth science at the UC Santa Barbara. “Much of where the early cultural complexity of humans developed is in this general area.”

    A favorite site for archaeologists and biblical scholars, the mound hosts evidence of culture all the way from the Chalcolithic, or Copper Age, all compacted into layers as the highly strategic settlement was built, destroyed, and rebuilt over millennia.

    But there is a 1.5-meter interval in the Middle Bronze Age II stratum that caught the interest of some researchers, for its “highly unusual” materials. In addition to the debris one would expect from destruction via warfare and earthquakes, they found pottery shards with outer surfaces melted into glass, “bubbled” mudbrick, and partially melted building material, all indications of an anomalously high-temperature event, much hotter than anything the technology of the time could produce.

    “We saw evidence for temperatures greater than 2,000 degrees Celsius,” said Kennett, whose research group at the time happened to have been building the case for an older cosmic airburst about 12,800 years ago that triggered major widespread burning, climatic changes and animal extinctions. The charred and melted materials at Tall el-Hammam looked familiar, and a group of researchers including impact scientist Allen West and Kennett joined Trinity Southwest University biblical scholar Philip J. Silvia’s research effort to determine what happened at this city 3,650 years ago.

    Their results are published in the journal Nature Scientific Reports.

    Salt and Bone
    “There’s evidence of a large cosmic airburst, close to this city called Tall el-Hammam,” Kennett said, of an explosion similar to the Tunguska Event, a roughly 12-megaton airburst that occurred in 1908, when a 56-60-meter meteor pierced the Earth’s atmosphere over the Eastern Siberian Taiga.

    The shock of the explosion over Tall el-Hammam was enough to level the city, flattening the palace and surrounding walls and mudbrick structures, according to the paper, and the distribution of bones indicated “extreme disarticulation and skeletal fragmentation in nearby humans.”

    For Kennett, further proof of the airburst was found by conducting many different kinds of analyses on soil and sediments from the critical layer. Tiny iron- and silica-rich spherules turned up in their analysis, as did melted metals.

    “I think one of the main discoveries is shocked quartz. These are sand grains containing cracks that form only under very high pressure” Kennett said of one of many lines of evidence that point to a large airburst near Tall el-Hammam. “We have shocked quartz from this layer, and that means there were incredible pressures involved to shock the quartz crystals— quartz is one of the hardest minerals; it’s very hard to shock.”

    The airburst, according to the paper, may also explain the “anomalously high concentrations of salt” found in the destruction layer — an average of 4% in the sediment and as high as 25% in some samples.

    “The salt was thrown up due to the high impact pressures,” Kennett said, of the meteor that likely fragmented upon contact with the Earth’s atmosphere. “And it may be that the impact partially hit the Dead Sea, which is rich in salt.” The local shores of the Dead Sea are also salt-rich, so the impact may have redistributed those salt crystals far and wide — not just at Tall el-Hammam, but also at nearby Tell es-Sultan (proposed as the biblical Jericho, which also underwent violent destruction at the same time) and Tall-Nimrin (also then destroyed).

    The high-salinity soil could have been responsible for the so-called “Late Bronze Age Gap,” the researchers say, in which cities along the lower Jordan Valley were abandoned, dropping the population from tens of thousands to maybe a few hundred nomads. Nothing could grow in these formerly fertile grounds, forcing people to leave the area for centuries. Evidence for resettlement of Tall el-Hammam and nearby communities appears again in the Iron Age, roughly 600 years after the cities’ sudden devastation in the Bronze Age.

    Fire and Brimstone
    Tall el-Hammam has been the focus of an ongoing debate as to whether it could be the biblical city of Sodom, one of the two cities in the Old Testament Book of Genesis that were destroyed by God for how wicked they and their inhabitants had become. One denizen, Lot, is saved by two angels who instruct him not to look back as they flee. Lot’s wife, however, lingers and is turned into a pillar of salt. Meanwhile, fire and brimstone fall from the sky; multiple cities are destroyed; thick smoke rises from the fires; city inhabitants are killed and area crops are destroyed in what sounds like an eyewitness account of a cosmic impact event. It’s a satisfying connection to make.

    “All the observations stated in Genesis are consistent with a cosmic airburst,” Kennett said, “but there’s no scientific proof that this destroyed city is indeed the Sodom of the Old Testament.” However, the researchers said, the disaster could have generated an oral tradition that may have served as the inspiration for the written account in the book of Genesis, as well as the biblical account of the burning of Jericho in the Old Testament Book of Joshua.

  • Happiness in Early Adulthood May Protect Against Dementia
    28 September 2021

    While research has shown that poor cardiovascular health can damage blood flow to the brain, increasing the risk for dementia, a new study led by UC San Francisco indicates that poor mental health may also take its toll on cognition.

    The research adds to a body of evidence that links depression with dementia, but while most studies have pointed to its association in later life, the UCSF study shows that depression in early adulthood may lead to lower cognition 10 years later and to cognitive decline in old age.

    The study was published in the Journal of Alzheimer’s Disease on Sept. 28, 2021.

    The researchers used innovative statistical methods to predict average trajectories of depressive symptoms for approximately 15,000 participants ages 20 to 89, divided into three life stages: older, midlife and young adulthood. They then applied these predicted trajectories and found that in a group of approximately 6,000 older participants, the odds of cognitive impairment were 73 percent higher for those estimated to have elevated depressive symptoms in early adulthood, and 43 percent higher for those estimated to have elevated depressive symptoms in later life.

    These results were adjusted for depressive symptoms in other life stages and for differences in age, sex, race, educational attainment, body mass index, history of diabetes and smoking status. For depressive symptoms in midlife, the researchers found an association with cognitive impairment, but this was discounted when they adjusted for depression in other life stages.
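
    To make the arithmetic concrete, the sketch below shows how an odds comparison of this kind is typically computed: a logistic regression of cognitive impairment on elevated depressive symptoms in each life stage, with exponentiated coefficients read as odds ratios. The data, variable names and effect sizes are invented for illustration (the simulated effects roughly echo the reported 73 and 43 percent figures); this is not the study's code or dataset.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical toy data: 1 = elevated depressive symptoms in that life stage.
        rng = np.random.default_rng(0)
        n = 6000
        df = pd.DataFrame({
            "dep_early": rng.integers(0, 2, n),
            "dep_mid":   rng.integers(0, 2, n),
            "dep_late":  rng.integers(0, 2, n),
            "age":       rng.normal(72, 5, n),
        })
        # Simulate an outcome so the example runs end to end (not the study's data);
        # log-odds effects of 0.55 and 0.36 correspond to odds ratios of ~1.73 and ~1.43.
        logit = -2 + 0.55 * df.dep_early + 0.36 * df.dep_late + 0.02 * (df.age - 72)
        df["impaired"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

        # Logistic regression adjusting for symptoms in other life stages and age.
        model = smf.logit("impaired ~ dep_early + dep_mid + dep_late + age", data=df).fit()
        print(np.exp(model.params))  # exponentiated coefficients read as odds ratios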

    Excess Stress Hormones May Damage Ability to Make New Memories

    “Several mechanisms explain how depression might increase dementia risk,” said first author Willa Brenowitz, PhD, MPH, of the UCSF Department of Psychiatry and Behavioral Sciences and the UCSF Weill Institute for Neurosciences. “Among them is that hyperactivity of the central stress response system increases production of the stress hormones glucocorticoids, leading to damage of the hippocampus, the part of the brain essential for forming, organizing and storing new memories.”

    Other studies have linked depression with atrophy of the hippocampus, and one study has shown faster rates of volume loss in women, she said.

    In estimating the depressive symptoms across each life stage, researchers pooled data from younger participants with data from the approximately 6,000 older participants and predicted average trajectories. These older participants, who lived at home and whose average age was 72 at the start of the study, had been enrolled by the Health, Aging and Body Composition Study and the Cardiovascular Health Study. They were followed annually or semi-annually for up to 11 years.

    U-Shaped Curve Adds Credence to Predicted Trajectories

    The authors acknowledged that assumed values had to be used, because no longitudinal studies of depressive symptoms have been completed across the full life course. “Imputed depressive symptom trajectories fit a U-shaped curve, similar to age-related trends in other research,” they noted.

    Participants were screened for depression using a tool called the CESD-10, a 10-item questionnaire assessing symptoms in the past week. Moderate or high depressive symptoms were found in 13 percent of young adults, 26 percent of midlife adults and 34 percent of older participants.

    Some 1,277 participants were diagnosed with cognitive impairment following neuropsychological testing, evidence of global decline, documented use of a dementia medication or hospitalization with dementia as a primary or secondary diagnosis.

    “Generally, we found that the greater the depressive symptoms, the lower the cognition and the faster the rates of decline,” said Brenowitz, who is also affiliated with the UCSF Department of Epidemiology and Biostatistics. “Older adults estimated to have moderate or high depressive symptoms in early adulthood were found to experience a drop in cognition over 10 years.”

    With up to 20 percent of the population suffering from depression during their lifetime, it’s important to recognize its role in cognitive aging, said senior author Kristine Yaffe, MD, of the UCSF departments of Psychiatry and Behavioral Sciences, and Epidemiology and Biostatistics. “Future work will be needed to confirm these findings, but in the meantime, we should screen and treat depression for many reasons.”

    Co-Authors: Eric Vittinghoff, PhD, from UCSF; Adina Zeki Al Hazzouri, PhD, from Columbia University; Sherita H. Golden, MD, from Johns Hopkins University School of Medicine; and Annette L. Fitzpatrick, PhD, from University of Washington.

    Funding: National Institutes of Health and National Institute on Aging (1RF1AG054443).

  • Humans may have hatched and raised deadly cassowary chicks
    28 September 2021

    As early as 18,000 years ago, humans in New Guinea may have collected cassowary eggs near maturity and then raised the birds to adulthood, according to an international team of scientists, who used eggshells to determine the developmental stage of the ancient embryos/chicks when the eggs cracked.

    “This behavior that we are seeing is coming thousands of years before domestication of the chicken,” said Kristina Douglass, assistant professor of anthropology and African studies, Penn State. “And this is not some small fowl, it is a huge, ornery, flightless bird that can eviscerate you. Most likely the dwarf variety that weighs 20 kilos (44 pounds).”

    The researchers report today (Sept. 27) in the Proceedings of the National Academy of Sciences that “the data presented here may represent the earliest indication of human management of the breeding of an avian taxon anywhere in the world, preceding the early domestication of chicken and geese by several millennia.”

    Cassowaries are not chickens; in fact, they bear more resemblance to velociraptors than most domesticated birds. “However, cassowary chicks imprint readily to humans and are easy to maintain and raise up to adult size,” the researchers report. Imprinting occurs when a newly hatched bird decides that the first thing it sees is its mother. If that first glance happens to catch sight of a human, the bird will follow the human anywhere.

    According to the researchers, cassowary chicks are still traded as a commodity in New Guinea.

    Importance of eggshells

    Eggshells are part of the assemblage of many archeological sites, but according to Douglass, archaeologists do not often study them. The researchers developed a new method to determine how old a chick embryo was when an egg was harvested. They reported this work in a recent issue of the Journal of Archaeological Science.

    “I’ve worked on eggshells from archaeological sites for many years,” said Douglass. “I discovered research on turkey eggshells that showed changes in the eggshells over the course of development that were an indication of age. I decided this would be a useful approach.”

    The age assignment of the embryos/chicks depends on the 3-dimensional features of the inside of the shell. To develop the method needed to determine the eggs’ developmental age when the shells broke, the researchers used ostrich eggs from a study done to improve ostrich reproduction.  Researchers at the Oudtshoorn Research Farm, part of the Western Cape Government of South Africa, harvested three eggs every day of incubation for 42 days for their study and supplied Douglass and her team with samples from 126 ostrich eggs.

    They took four samples from each of these eggs for a total of 504 shell samples, each having a specific age. They created high-resolution, 3D images of the shell samples. By inspecting the inside of these eggs, the researchers created a statistical assessment of what the eggs looked like during stages of incubation. The researchers then tested their model with modern ostrich and emu eggs of known age.

    The insides of the eggshells change through development because the developing chicks get calcium from the eggshell. Pits begin to appear in the middle of development.

    “It is time dependent, but a little more complicated,” said Douglass. “We used a combination of 3D imaging, modeling and morphological descriptions.”
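
    As a rough illustration of that modeling step (fitting a statistical model that maps features of the shell interior to a developmental age, then checking it on eggs of known age), here is a minimal sketch in Python. The feature names and all numbers are hypothetical stand-ins, not the team's actual 3D measurements or workflow.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import cross_val_score

        # Hypothetical feature table: one row per shell sample, columns are simple
        # descriptors extracted from 3D images of the shell interior
        # (e.g. pit density, pit depth, surface roughness). Not the study's data.
        rng = np.random.default_rng(1)
        n_samples = 504
        incubation_day = rng.integers(1, 43, n_samples)                 # known ages, days 1-42
        pit_density = incubation_day * 0.8 + rng.normal(0, 3, n_samples)
        pit_depth = incubation_day * 0.05 + rng.normal(0, 0.4, n_samples)
        roughness = rng.normal(5, 1, n_samples)
        X = np.column_stack([pit_density, pit_depth, roughness])

        # Fit a model mapping shell-interior features to developmental age,
        # then check it by cross-validation before applying it to ancient shells.
        model = RandomForestRegressor(n_estimators=200, random_state=0)
        print(cross_val_score(model, X, incubation_day, cv=5, scoring="r2"))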

    The researchers then turned to legacy shell collections from two sites in New Guinea — Yuku and Kiowa. They applied their approach to more than 1,000 fragments of these 18,000- to 6,000-year-old eggs.

    “What we found was that a large majority of the eggshells were harvested during late stages,” said Douglass. “The eggshells look very late; the pattern is not random. They were either into eating baluts or they are hatching chicks.”

    A balut is a nearly developed embryo chick usually boiled and eaten as street food in parts of Asia.

    The original archaeologists found no indication of penning for the cassowaries. The few cassowary bones found at sites are only those of the meaty portions — leg and thigh — suggesting these were hunted birds, processed in the wild and only the meatiest parts got hauled home.

    “We also looked at burning on the eggshells,” said Douglass. “There are enough samples of late stage eggshells that do not show burning that we can say they were hatching and not eating them.”

    To successfully hatch and raise cassowary chicks, the people would need to know where the nests were, know when the eggs were laid and remove them from the nest just before hatching. Back in the late Pleistocene, according to Douglass, humans were purposefully collecting these eggs and this study suggests people were not just harvesting eggs to eat the contents.

    Also working on this project from Penn State were Priyangi Bulathsinhala, assistant teaching professor of statistics; Tim Tighe, assistant research professor, Materials Research Institute; and Andrew L. Mack, grants and contract coordinator, Penn State Altoona.

    Others working on the project include Dylan Gaffney, graduate student, University of Cambridge, U.K.; Theresa J. Feo, senior science officer, California Council of Science and Technology; and Megan Spitzer, research assistant; Scott Whittaker, manager, scientific imaging; Helen James, research zoologist and curator of birds; and Torben Rick, curator of North American Archaeology, all at the National Museum of Natural History, Smithsonian Institution. Glenn R. Summerhayes, professor of archaeology, University of Otago, New Zealand; and Zanell Brand, production scientist, Oudtshoorn Research Farm, Elsenburg, Department of Agriculture, Western Cape Government, South Africa, also worked on the project.

    The Smithsonian National Museum of Natural History, the National Science Foundation and Penn State’s College of the Liberal Arts supported this work.

  • A new approach to the data-deletion conundrum
    28 September 2021

    Rising consumer concern over data privacy has led to a rush of “right to be forgotten” laws around the world that allow individuals to request their personal data be expunged from massive databases that catalog our increasingly online lives. Researchers in artificial intelligence have observed that user data does not only exist in its raw form in a database; it is also implicitly contained in models trained on that data. So far, they have struggled to find methods for deleting these “traces” of users efficiently. The more complex the model is, the more challenging it becomes to delete data.

    “The exact deletion of data — the ideal — is hard to do in real time,” says James Zou, a professor of biomedical data science at Stanford University and an expert in artificial intelligence. “In training our machine learning models, bits and pieces of data can get embedded in the model in complicated ways. That makes it hard for us to guarantee a user has truly been forgotten without altering our models substantially.”

    Zou is senior author of a paper recently presented at the International Conference on Artificial Intelligence and Statistics (AISTATS) that may provide an answer to the data deletion problem, one that works for privacy-concerned individuals and artificial intelligence experts alike. They call it approximate deletion.

    Read the study: Approximate Data Deletion from Machine Learning Models

    “Approximate deletion, as the name suggests, allows us to remove most of the users’ implicit data from the model. They are ‘forgotten,’ but in such a way that we can do the retraining of our models at a later, more opportune time,” says Zach Izzo, a graduate student in mathematics and the first author of the AISTATS paper.

    Approximate deletion is especially useful in quickly removing sensitive information or features unique to a given individual that could potentially be used for identification after the fact, while postponing the computationally intensive full model retraining to times of lower computational demand. Under certain assumptions, Zou says, approximate deletion even achieves the holy grail of exact deletion of a user’s implicit data from the trained model.
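
    The paper's own algorithm is more general, but the basic idea of removing a training point's implicit contribution without refitting from scratch can be illustrated on a simple model. The sketch below uses ridge regression, where a deleted point's contribution can be subtracted out with a cheap rank-one update of the fitted solution (via the Sherman-Morrison identity); for linear models this shortcut happens to be exact, which the sanity check at the end confirms. This is a generic illustration under those assumptions, not the approximate-deletion method described in the paper.

        import numpy as np

        rng = np.random.default_rng(2)
        n, d, lam = 1000, 10, 1.0
        X = rng.normal(size=(n, d))
        y = X @ rng.normal(size=d) + rng.normal(scale=0.1, size=n)

        # Fit ridge regression: w = (X'X + lam*I)^{-1} X'y
        A_inv = np.linalg.inv(X.T @ X + lam * np.eye(d))
        w = A_inv @ (X.T @ y)

        # "Delete" training point i without retraining from scratch: subtract its
        # contribution and update the inverse with the Sherman-Morrison identity.
        i = 42
        x_i, y_i = X[i], y[i]
        Ainv_x = A_inv @ x_i
        A_inv_del = A_inv + np.outer(Ainv_x, Ainv_x) / (1.0 - x_i @ Ainv_x)
        w_del = A_inv_del @ (X.T @ y - x_i * y_i)

        # Sanity check against full retraining on the remaining n-1 points.
        mask = np.ones(n, dtype=bool)
        mask[i] = False
        A_full = X[mask].T @ X[mask] + lam * np.eye(d)
        w_full = np.linalg.solve(A_full, X[mask].T @ y[mask])
        print(np.allclose(w_del, w_full))  # True: the update is exact for linear models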

    Driven by Data

    Machine learning works by combing databases and applying various predictive weights to features in the data — an online shopper’s age, location, and previous purchase history, for instance, or a streamer’s past viewing history and personal ratings of movies  watched. The models are not confined to commercial applications and are now widely used in radiology, pathology, and other fields of direct human impact.

    In theory, information in a database is anonymized, but users concerned about privacy fear that they can still be identified by the bits and pieces of information about them that are still wedged in the models, begetting the need for right to be forgotten laws.

    The gold standard in the field, Izzo says, is to find the exact same model as if the machine learning had never seen the deleted data points in the first place. That standard, known as “exact deletion,” is hard if not impossible to achieve, especially with large, complicated models like those that recommend products or movies to online shoppers and streamers. Exact data deletion effectively means retraining a model from scratch, Izzo says.

    “Doing that requires taking the algorithm offline for retraining. And that costs real money and real time,” he says.

    What is Approximate Deletion?

    In solving the deletion conundrum, Zou and Izzo have come at things slightly differently than their counterparts in the field. In effect, they create synthetic data to replace — or, more accurately, negate — that of the individual who wishes to be forgotten.

    This temporary solution satisfies the privacy-minded individual’s immediate desire to not be identified from data in the model — that is, to be forgotten — while reassuring the computer scientists, and the businesses that rely upon them, that their models will work as planned, at least until a more opportune time when the model can be retrained at lower cost.

    There is a philosophical aspect to the challenge, the authors say. Where privacy, law, and commerce intersect, the discussion begins with a meaningful definition of what it means to “delete” information. Does deletion mean the actual destruction of data? Or is it enough to ensure that no one could ever identify an anonymous person from it? In the end, Izzo says, answering that key question requires balancing the privacy rights of consumers and the needs of science and commerce.

    “That’s a pretty difficult, non-trivial question,” Izzo says. “For many of the more complicated models used in practice, even if you delete zero people from a database, retraining alone can result in a completely different model. So even defining the proper target for the retrained model is challenging.”

    With their approximate deletion approach in hand, the authors then validated the effectiveness of their method empirically, confirming their theoretical approach on the path to practical application. That critical step now becomes the goal of future work.

    “We think approximate deletion is an important initial step toward solving what has been a difficult challenge for AI,” Zou says.


  • Antimicrobial Coating for Orthopedic Implants Prevents Dangerous Infections
    28 September 2021

    Biomedical engineers and surgeons at Duke University and UCLA have demonstrated an antibiotic coating that can be applied to orthopedic implants minutes before surgery and that eliminates the chances of an infection around the implant.

    In early trials in mice, the coating prevented all subsequent infections, even without infusions of antibiotics into the bloodstream, which is the current standard of care. After 20 days, the coating did not reduce the bone’s ability to fuse with the implant and was completely absorbed by the body.

    The results appear online September 16 in the journal Nature Communications.

    The project began when Tatiana Segura, professor of biomedical engineering at Duke, met Nicholas Bernthal, interim chair and executive medical director at the David Geffen School of Medicine at UCLA, who specializes in pediatric orthopedic oncology and surgery. He told Segura that many children being treated for bone cancer have large portions of bone removed, which then requires orthopedic implants. But because the patients are usually also undergoing chemotherapy, their immune systems are weak and they are especially vulnerable to bacteria colonizing the surface of the implant.

    “These kids face the choice of having chemotherapy versus saving their limb or even sometimes needing amputations to survive, which sounds horrific to me,” Segura said. “All they really need is something to rub on the implant to stop an infection from taking hold, because preventing an infection is much easier than treating one. So we came up with this coating technology that we hope will provide a solution.”

    Implant infections aren’t unique to children or to cancer patients, however. For joint replacement surgeries, for example, infection occurs in 1% of primary and up to 7% of revision surgeries, which requires repeated revision surgeries and prolonged intravenous antibiotics. Treatment doesn’t always work, however, as these patients have a higher five-year mortality risk than those diagnosed with HIV/AIDS or breast cancer. Implant infections are estimated to cost the health care system more than $8.6 billion annually in the U.S. alone.

    Part of the challenge of treating these infections is that bacteria colonize the surface of the implants themselves. This means that there are no blood vessels flowing through the bacterial colonies to deliver the antibiotics coursing through a patient’s veins. The only recourse is often the removal of the original implant, which is usually the best of what are only bad options.

    Some doctors have turned to solutions of their own, such as using antibiotic powder when closing the surgical wound or infusing the bone cement used to hold the implant in place with antibiotics. Neither of these tactics has been proven to be clinically effective. There is also the option of implant manufacturers adding antibiotic properties to their devices. But this would greatly reduce the product’s shelf life and also require a long and complicated process of FDA approval, since the implants would then be in a new classification.

    Segura’s new antibiotic coating sidesteps all of these challenges.

    “We’ve shown that a point-of-care, antibiotic-releasing coating protects implants from bacterial challenge, and can be quickly and safely applied in the operating room without the need to modify existing implants,” said Christopher Hart, a resident physician in UCLA Orthopaedic Surgery who helped conduct the experiments.

    The new antimicrobial coating is made of two polymers, one that repels water and one that mixes well with water. Both are combined in a solution with an antibiotic of the physician’s choosing and then applied directly to the orthopedic implant by dipping, painting or spraying. When exposed to a bright ultraviolet light, the two polymers couple together and self-assemble into a grid-like structure that traps the antibiotics.

    The reaction is an example of “click chemistry,” which is a general way of describing reactions that happen quickly at room temperature, produce only a single reaction product, have an extremely high yield and occur within a single container.

    “This study is a great example of the power of click chemistry in biomedical applications,” said Weixian Xi, now a senior scientist at Illumina who was a postdoctoral researcher at UCLA during the study. “This ‘smart’ and ‘clickable’ polymeric coating enables protections of implants from bacterial infection and makes a personalized approach possible.”

    “Our coating can be personalizable because it can use almost any antibiotic,” Segura continued. “The antibiotic can be chosen by the physician based on where in the body the device is being implanted and what pathogens are common in whatever part of the world the surgery is taking place.”

    The click chemistry polymer grid also has an affinity for metal. Tests involving various types of implants showed that the coating was very difficult to rub off during surgical procedures. Once inside the body, however, the conditions cause the polymer to degrade, slowly releasing the antibiotics over the course of two to three weeks.

    In the study, researchers rigorously tested the coating in mice with either leg or spine implants. After 20 days, the coating did not inhibit the bone’s growth into the implant and prevented 100% of infections. This time period, the researchers say, is long enough to prevent the vast majority of these types of infections from occurring.

    The researchers have not yet tested their coating on larger animals. Since larger animals—such as humans—have larger bones and need larger implants, there is much more surface area to protect against bacterial infections. But the researchers are confident that their invention is up to the task and plan to pursue the steps needed to commercialize the product.

    “We believe this transdisciplinary work represents the future of surgical implants, providing a point of application coating that transforms the implant from a hotspot for infection into a ‘smart’ antimicrobial therapeutic,” Bernthal said. “You only need to treat a single patient with an infected implant to realize how transformational this could be for patient care —saving both life and limbs for many.”

    This research was supported by the National Institute of Arthritis and Musculoskeletal and Skin Diseases of the National Institutes of Health (5K08AR069112-01, T32AR059033).

    “Point-Of-Care Antimicrobial Coating Protects Orthopaedic Implants From Bacterial Challenge.” Weixian Xi, Vishal Hegde, Stephen D. Zoller, Howard Y. Park, Christopher M. Hart, Takeru Kondo, Christopher D. Hamad, Yan Hu, Amanda H. Loftin, Daniel O. Johansen, Zachary Burke, Samuel Clarkson, Chad Ishmael, Kellyn Hori, Zeinab Mamouei, Hiroko Okawa, Ichiro Nishimura, Nicholas M. Bernthal and Tatiana Segura. Nature Communications, Sept. 17, 2021. DOI: 10.1038/s41467-021-25383-z

  • Male giraffes are more socially connected than females
    28 September 2021

    Although female giraffes have closer “friends” than male giraffes, male giraffes have more “acquaintances” than females, according to a new study by an international team that includes a Penn State biologist. The study demonstrates that giraffes form a complex multilevel society that is driven by differences in the social connections among individuals, which could have conservation implications for the endangered giraffes.

    “The degree to which an animal is connected to others in its social network influences reproductive success and population ecology, spread of information, and even how diseases move through a population,” said Derek Lee, associate research professor at Penn State and an author of the paper. “Information about sociality therefore can provide important guidance for conservation.”

    The research team examined social connectedness and social movements of endangered Masai giraffes in the Tarangire Ecosystem of northern Tanzania using data collected over 5 years. The work, led by Juan Lavista Ferres of the Microsoft AI for Good Research Lab, involved constructing the social network of more than 1,000 free-ranging giraffes. The team presents their results in a paper appearing Sept. 27 in the journal Animal Behaviour.

    “We found that male giraffes overall had higher social connectedness than females, which means males interact with greater numbers of other individuals than females,” said Lee. “Older males had the shortest social path length to all the other giraffes in the network. This might reflect the mating strategy of males, who roam widely across the landscape searching for females to mate with and make connections in the process. Young males had the most social ties and moved most often among groups, reflecting social exploration as they prepare to disperse away from their mothers.”
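
    The network measures mentioned here (how many associates an individual has, and how few "hops" separate it from everyone else) are standard social-network statistics. A minimal sketch on an invented association network, not the Tarangire data, might look like this:

        import networkx as nx

        # Toy association network (made-up data): nodes are giraffes, edges mean
        # two individuals were repeatedly observed in the same group.
        G = nx.gnm_random_graph(n=200, m=800, seed=3)

        # "Social connectedness" as degree: how many other individuals each
        # giraffe associates with.
        degree = dict(G.degree())
        focal = max(degree, key=degree.get)
        print("highest degree:", degree[focal])

        # "Social path length" as shortest-path distance from a focal animal to
        # all others, computed on the largest connected component so it is defined.
        giant = G.subgraph(max(nx.connected_components(G), key=len))
        lengths = nx.single_source_shortest_path_length(giant, focal)
        print("mean path length from focal animal:",
              sum(lengths.values()) / (len(lengths) - 1))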

    According to the study, adult female giraffes tend to have fewer but stronger relationships with each other than males and younger females, a trend that has also been observed in giraffe populations elsewhere in Africa. The researchers previously found that relationships among female giraffes allow them to live longer.

    The results reveal an additional layer of complexity to giraffe societies beyond what was seen in earlier research. Previous research showed that adult females in this population have formed about a dozen distinct groups, or communities, of 60 to 90 individuals that tend to associate more with each other than with members of the other groups, even when the groups use the same spaces. The current study builds on this knowledge and found that the full population, including calves and adult males, has a more complex structure: The female communities are embedded within three socially distinct larger groups called ‘super-communities’ of between 800 and 900 individuals, and one ‘oddball’ super-community of 155 individuals in a small, isolated area.

    “Among giraffes, adult females have enduring social relationships and form distinct and stable social communities with a relatively large number of other females, while, in their perpetual search for mating opportunities, adult males connect the adult female communities, forming super-communities,” said Monica Bond, a postdoctoral research associate at the University of Zurich and an author of the paper. “This type of complex society has evolutionary and conservation advantages, because the dynamics of the social system should allow gene flow between groups, which is an important part of maintaining a healthy and robust population.”

    The current research adds to a growing body of literature demonstrating that giraffes live in a socially structured society, despite the fact that herds have what researchers call “fission-fusion” dynamics, with the size and composition of the population constantly changing as animals move through the environment. Fission-fusion grouping dynamics are common among mammals, such as elephants, bats, some primates, and cetaceans, but, according to the researchers, this study is the first to demonstrate that giraffes reside in a complex society with dynamic herds embedded into stable communities within stable super-communities, all of which are driven by the variation in social connections among individuals.

    “The large scale of the study, in terms of the size of the landscape and the sheer number of animals, enabled us to uncover an upper apex level of social structure that was previously unknown,” said Lavista. “Using Microsoft’s AI tools allowed us to visualize and analyze a large volume of data to gain meaningful insights about giraffes over the 5 years of study.”

    The researchers believe the complex nature of giraffe populations could impact conservation efforts for these endangered giraffes, including translocation efforts that move individuals to new areas. They caution that translocating a small number of individuals to new areas should be limited, because such invasive actions destabilize the intricate web of social relationships among giraffes.

    In addition to Lee, Lavista, and Bond, the research team includes Md Nasir and Avleen Bijral at the Microsoft AI for Good Research Lab; Yu-Chia Chen at the University of Washington, Seattle; and Fred Bercovitch at Kyoto University in Japan.

    The research was supported by the Sacramento Zoo, the Columbus Zoo and Aquarium, the Tulsa Zoo, Tierpark Berlin and Zoo Berlin, Zoo Miami, the Cincinnati Zoo and Botanical Garden, and Save the Giraffes.

  • Massively Reducing Food Waste Could Feed the World
    28 September 2021
    It would also greatly cut greenhouse gas emissions
    -- Read more on ScientificAmerican.com
  • Bright lava flows, smoke pour from La Palma volcano eruption in new Landsat photos
    28 September 2021
    New satellite images of an active volcano on the Spanish island of La Palma capture vivid streams of lava pouring down the coastal mountain range and nearing the Atlantic Ocean.
  • The coevolution of particle physics and computing
    28 September 2021

    Over time, particle physics and astrophysics and computing have built upon one another’s successes. That coevolution continues today.

    In the mid-twentieth century, particle physicists were peering deeper into the history and makeup of the universe than ever before. Over time, their calculations became too complex to fit on a blackboard—or to farm out to armies of human “computers” doing calculations by hand. 

    To deal with this, they developed some of the world’s earliest electronic computers. 

    Physics has played an important role in the history of computing. The transistor—the switch that controls the flow of electrical signal within a computer—was invented by a group of physicists at Bell Labs. The incredible computational demands of particle physics and astrophysics experiments have consistently pushed the boundaries of what is possible. They have encouraged the development of new technologies to handle tasks from dealing with avalanches of data to simulating interactions on the scales of both the cosmos and the quantum realm. 

    But this influence doesn’t just go one way. Computing plays an essential role in particle physics and astrophysics as well. As computing has grown increasingly sophisticated, its own progress has enabled new scientific discoveries and breakthroughs. 

    Illustration by Sandbox Studio, Chicago with Ariel Davis
    Managing an onslaught of data

    In 1973, scientists at Fermi National Accelerator Laboratory in Illinois got their first big mainframe computer: a 7-year-old hand-me-down from Lawrence Berkeley National Laboratory. Called the CDC 6600, it weighed about 6 tons. Over the next five years, Fermilab added five more large mainframe computers to its collection.

    Then came the completion of the Tevatron—at the time, the world’s highest-energy particle accelerator—which would provide the particle beams for numerous experiments at the lab. By the mid-1990s, two four-story particle detectors would begin selecting, storing and analyzing data from millions of particle collisions at the Tevatron per second. Called the Collider Detector at Fermilab and the DZero detector, these new experiments threatened to overpower the lab’s computational abilities.

    In December of 1983, a committee of physicists and computer scientists released a 103-page report highlighting the “urgent need for an upgrading of the laboratory’s computer facilities.” The report said the lab “should continue the process of catching up” in terms of computing ability, and that “this should remain the laboratory’s top computing priority for the next few years.”

    Instead of simply buying more large computers (which were incredibly expensive), the committee suggested a new approach: They recommended increasing computational power by distributing the burden over clusters or “farms” of hundreds of smaller computers. 

    Thanks to Intel’s 1971 development of a new commercially available microprocessor the size of a domino, computers were shrinking. Fermilab was one of the first national labs to try the concept of clustering these smaller computers together, treating each particle collision as a computationally independent event that could be analyzed on its own processor.  
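
    In modern terms, that clustering strategy treats event analysis as an embarrassingly parallel job: each collision is processed independently, so events can be farmed out across many workers. A minimal present-day sketch of the same idea, with toy events and a hypothetical analyze_event function rather than any real detector code, could look like this:

        from multiprocessing import Pool

        def analyze_event(event):
            """Process one collision event; events are independent, so each can be
            handed to its own worker in the 'farm'. Placeholder analysis: sum the
            (made-up) energy deposits."""
            return sum(hit["energy"] for hit in event["hits"])

        if __name__ == "__main__":
            # Toy "events" standing in for detector data.
            events = [{"hits": [{"energy": e * 0.1} for e in range(i % 50)]}
                      for i in range(10_000)]
            with Pool() as farm:                    # the "farm" of workers
                results = farm.map(analyze_event, events)
            print(len(results), "events analyzed")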

    Like many new ideas in science, it wasn’t accepted without some pushback. 

    Joel Butler, a physicist at Fermilab who was on the computing committee, recalls, “There was a big fight about whether this was a good idea or a bad idea.”

    A lot of people were enchanted with the big computers, he says. They were impressive-looking and reliable, and people knew how to use them. And then along came “this swarm of little tiny devices, packaged in breadbox-sized enclosures.” 

    The computers were unfamiliar, and the companies building them weren’t well-established. On top of that, it wasn’t clear how well the clustering strategy would work. 

    As for Butler? “I raised my hand [at a meeting] and said, ‘Good idea’—and suddenly my entire career shifted from building detectors and beamlines to doing computing,” he chuckles. 

    Not long afterward, innovation that sparked for the benefit of particle physics enabled another leap in computing. In 1989, Tim Berners-Lee, a computer scientist at CERN, launched the World Wide Web to help CERN physicists share data with research collaborators all over the world. 

    To be clear, Berners-Lee didn’t create the internet—that was already underway in the form of the ARPANET, developed by the US Department of Defense. But the ARPANET connected only a few hundred computers, and it was difficult to share information across machines with different operating systems. 

    The web Berners-Lee created was an application that ran on the internet, like email, and started as a collection of documents connected by hyperlinks. To get around the problem of accessing files between different types of computers, he developed HTML (HyperText Markup Language), a markup language that formatted and displayed files in a web browser independent of the local computer’s operating system. 

    Berners-Lee also developed the first web browser, allowing users to access files stored on the first web server (Berners-Lee’s computer at CERN). He implemented the concept of a URL (Uniform Resource Locator), specifying how and where to access desired web pages. 

    What started out as an internal project to help particle physicists share data within their institution fundamentally changed not just computing, but how most people experience the digital world today. 

    Back at Fermilab, cluster computing wound up working well for handling the Tevatron data. Eventually, it became the industry standard for tech giants like Google and Amazon. 

    Over the next decade, other US national laboratories adopted the idea, too. SLAC National Accelerator Laboratory—then called Stanford Linear Accelerator Center—transitioned from big mainframes to clusters of smaller computers to prepare for its own extremely data-hungry experiment, BaBar. Both SLAC and Fermilab also were early adopters of Berners-Lee’s web server. The labs set up the first two websites in the United States, paving the way for this innovation to spread across the continent.

    In 1989, in recognition of the growing importance of computing in physics, Fermilab Director John Peoples elevated the computing department to a full-fledged division. The head of a division reports directly to the lab director, making it easier to get resources and set priorities. Physicist Tom Nash formed the new Computing Division, along with Butler and two other scientists, Irwin Gaines and Victoria White. Butler led the division from 1994 to 1998. 

    High-performance computing in particle physics and astrophysics

    These computational systems worked well for particle physicists for a long time, says Berkeley Lab astrophysicist Peter Nugent. That is, until Moore’s Law started grinding to a halt.  

    Moore’s Law is the idea that the number of transistors in a circuit will double, making computers faster and cheaper, every two years. The term was first coined in the mid-1970s, and the trend reliably proceeded for decades. But now, computer manufacturers are starting to hit the physical limit of how many tiny transistors they can cram onto a single microchip. 

    Because of this, says Nugent, particle physicists have been looking to take advantage of high-performance computing instead. 

    Nugent says high-performance computing is “something more than a cluster, or a cloud-computing environment that you could get from Google or AWS, or at your local university.” 

    What it typically means, he says, is that you have high-speed networking between computational nodes, allowing them to share information with each other very, very quickly. When you are computing on up to hundreds of thousands of nodes simultaneously, it massively speeds up the process. 

    On a single traditional computer, he says, 100 million CPU hours translates to more than 11,000 years of continuous calculations. But for scientists using a high-performance computing facility at Berkeley Lab, Argonne National Laboratory or Oak Ridge National Laboratory, 100 million hours is a typical, large allocation for one year at these facilities.
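
    The 11,000-year figure is straightforward back-of-the-envelope arithmetic, as a quick check in Python shows:

        hours = 100_000_000             # a 100-million-CPU-hour allocation
        years = hours / (24 * 365)      # one processor running around the clock
        print(round(years))             # ~11,416, i.e. "more than 11,000 years"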

    Although astrophysicists have always relied on high-performance computing for simulating the birth of stars or modeling the evolution of the cosmos, Nugent says they are now using it for their data analysis as well. 

    This includes rapid image-processing computations that have enabled the observations of several supernovae, including SN 2011fe, captured just after it began. “We found it just a few hours after it exploded, all because we were able to run these pipelines so efficiently and quickly,” Nugent says. 

    According to Berkeley Lab physicist Paolo Calafiura, particle physicists also use high-performance computing for simulations—for modeling not the evolution of the cosmos, but rather what happens inside a particle detector. “Detector simulation is significantly the most computing-intensive problem that we have,” he says. 

    Scientists need to evaluate multiple possibilities for what can happen when particles collide. To properly correct for detector effects when analyzing particle detector experiments, they need to simulate more data than they collect. “If you collect 1 billion collision events a year,” Calafiura says, “you want to simulate 10 billion collision events.”

    Calafiura says that right now, he’s more worried about finding a way to store all of the simulated and actual detector data than he is about producing it, but he knows that won’t last. 

    “When does physics push computing?” he says. “When computing is not good enough… We see that in five years, computers will not be powerful enough for our problems, so we are pushing hard with some radically new ideas, and lots of detailed optimization work.”

    That’s why the Department of Energy’s Exascale Computing Project aims to build, in the next few years, computers capable of performing a quintillion (that is, a billion billion) operations per second. The new computers will be 1000 times faster than the current fastest computers. 

    The exascale computers will also be used for other applications ranging from precision medicine to climate modeling to national security.

    Machine learning and quantum computing

    Innovations in computer hardware have enabled astrophysicists to push the kinds of simulations and analyses they can do. For example, Nugent says, the introduction of graphics processing units has sped up astrophysicists’ ability to do calculations used in machine learning, leading to an explosive growth of machine learning in astrophysics. 

    With machine learning, which uses algorithms and statistics to identify patterns in data, astrophysicists can simulate entire universes in microseconds. 

    Machine learning has been important in particle physics as well, says Fermilab scientist Nhan Tran. “[Physicists] have very high-dimensional data, very complex data,” he says. “Machine learning is an optimal way to find interesting structures in that data.”

    The same way a computer can be trained to tell the difference between cats and dogs in pictures, it can learn how to identify particles from physics datasets, distinguishing between things like pions and photons. 

    Tran says using computation this way can accelerate discovery. “As physicists, we’ve been able to learn a lot about particle physics and nature using non-machine-learning algorithms,” he says. “But machine learning can drastically accelerate and augment that process—and potentially provide deeper insight into the data.” 
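
    A toy version of that workflow, using invented "shower shape" features and labels rather than real detector data, shows the general pattern: train a classifier on labeled examples, then measure how well it separates the two particle types on held-out data.

        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.model_selection import train_test_split

        # Made-up stand-in for detector data: two synthetic features per particle,
        # with "photons" and "pions" drawn from overlapping distributions.
        rng = np.random.default_rng(4)
        n = 5000
        labels = rng.integers(0, 2, n)                  # 0 = pion, 1 = photon
        shower_width = rng.normal(1.0 + 0.4 * labels, 0.3, n)
        energy_ratio = rng.normal(0.6 + 0.25 * labels, 0.1, n)
        X = np.column_stack([shower_width, energy_ratio])

        X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
        clf = GradientBoostingClassifier().fit(X_tr, y_tr)
        print("held-out accuracy:", clf.score(X_te, y_te))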

    And while teams of researchers are busy building exascale computers, others are hard at work trying to build another type of supercomputer: the quantum computer.

    Remember Moore’s Law? Previously, engineers were able to make computer chips faster by shrinking the size of electrical circuits, reducing the amount of time it takes for electrical signals to travel. “Now our technology is so good that literally the distance between transistors is the size of an atom,” Tran says. “So we can’t keep scaling down the technology and expect the same gains we’ve seen in the past."

    To get around this, some researchers are redefining how computation works at a fundamental level—like, really fundamental. 

    The basic unit of data in a classical computer is called a bit, which can hold one of two values: 1, if it has an electrical signal, or 0, if it has none. But in quantum computing, data is stored in quantum systems—things like electrons, which have either up or down spins, or photons, which are polarized either vertically or horizontally. These data units are called “qubits.”

    Here’s where it gets weird. Through a quantum property called superposition, qubits have more than just two possible states. An electron can be up, down, or in a variety of states in between.

    What does this mean for computing? A collection of three classical bits can exist in only one of eight possible configurations: 000, 001, 010, 100, 011, 110, 101 or 111. But through superposition, three qubits can be in all eight of these configurations at once. A quantum computer can use that information to tackle problems that are impossible to solve with a classical computer.
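
    The bookkeeping behind that claim can be written down directly: a register of three qubits is described by 2^3 = 8 complex amplitudes, one per configuration, and a uniform superposition puts equal weight on every configuration. The numpy sketch below simulates that state vector on an ordinary computer (exactly the kind of simulation that becomes infeasible as qubit counts grow); it is an illustration of the arithmetic, not a quantum computation.

        import numpy as np

        ket0 = np.array([1.0, 0.0])
        plus = np.array([1.0, 1.0]) / np.sqrt(2)        # equal superposition of 0 and 1

        # Three classical bits hold exactly one configuration:
        classical = np.kron(np.kron(ket0, ket0), ket0)  # all weight on "000"

        # Three qubits in superposition carry weight on all eight configurations at once:
        state = np.kron(np.kron(plus, plus), plus)
        print(np.round(state, 3))                       # 8 equal amplitudes of 1/sqrt(8)
        print(np.sum(np.abs(state) ** 2))               # probabilities still sum to 1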

    Fermilab scientist Aaron Chou likens quantum problem-solving to throwing a pebble into a pond. The ripples move through the water in every possible direction, “simultaneously exploring all of the possible things that it might encounter.” 

    In contrast, a classical computer can only move in one direction at a time. 

    But this makes quantum computers faster than classical computers only when it comes to solving certain types of problems. “It’s not like you can take any classical algorithm and put it on a quantum computer and make it better,” says University of California, Santa Barbara physicist John Martinis, who helped build Google’s quantum computer. 

    Although quantum computers work in a fundamentally different way than classical computers, designing and building them wouldn’t be possible without traditional computing laying the foundation, Martinis says. “We're really piggybacking on a lot of the technology of the last 50 years or more.”

    The kinds of problems that are well suited to quantum computing are intrinsically quantum mechanical in nature, says Chou.

    For instance, Martinis says, consider quantum chemistry. Solving quantum chemistry problems with classical computers is so difficult, he says, that 10 to 15% of the world’s supercomputer usage is currently dedicated to the task. “Quantum chemistry problems are hard for the very reason why a quantum computer is powerful”—because to complete them, you have to consider all the different quantum-mechanical states of all the individual atoms involved. 

    Because making better quantum computers would be so useful in physics research, and because building them requires skills and knowledge that physicists possess, physicists are ramping up their quantum efforts. In the United States, the National Quantum Initiative Act of 2018 called for the National Institute of Standards and Technology, the National Science Foundation and the Department of Energy to support programs, centers and consortia devoted to quantum information science. 

    Coevolution requires cooperation 

    In the early days of computational physics, the line between who was a particle physicist and who was a computer scientist could be fuzzy. Physicists used commercially available microprocessors to build custom computers for experiments. They also wrote much of their own software—ranging from printer drivers to the software that coordinated the analysis between the clustered computers. 

    Nowadays, roles have somewhat shifted. Most physicists use commercially available devices and software, allowing them to focus more on the physics, Butler says. But some people, like Anshu Dubey, work right at the intersection of the two fields. Dubey is a computational scientist at Argonne National Laboratory who works with computational physicists. 

    When a physicist needs to computationally interpret or model a phenomenon, sometimes they will sign up a student or postdoc in their research group for a programming course or two and then ask them to write the code to do the job. Although these codes are mathematically complex, Dubey says, they aren’t logically complex, making them relatively easy to write.  

    A simulation of a single physical phenomenon can be neatly packaged within fairly straightforward code. “But the real world doesn’t want to cooperate with you in terms of its modularity and encapsularity,” she says. 

    Multiple forces are always at play, so to accurately model real-world complexity, you have to use more complex software—ideally software that doesn’t become impossible to maintain as it gets updated over time. “All of a sudden,” says Dubey, “you start to require people who are creative in their own right—in terms of being able to architect software.” 

    That’s where people like Dubey come in. At Argonne, Dubey develops software that researchers use to model complex multi-physics systems—incorporating processes like fluid dynamics, radiation transfer and nuclear burning.

    Hiring computer scientists for research projects in physics and other fields of science can be a challenge, Dubey says. Most funding agencies specify that research money can be used for hiring students and postdocs, but not paying for software development or hiring dedicated engineers. “There is no viable career path in academia for people whose careers are like mine,” she says.

    In an ideal world, universities would establish endowed positions for a team of research software engineers in physics departments with a nontrivial amount of computational research, Dubey says. These engineers would write reliable, well-architected code, and their institutional knowledge would stay with a team. 

    Physics and computing have been closely intertwined for decades. However the two develop—toward new analyses using artificial intelligence, for example, or toward the creation of better and better quantum computers—it seems they will remain on this path together.



A.F.L. Footy blogs

AFL Blogs

28 September 2021

AFL Blogs
  • The five players your team can least afford to lose: GWS Giants
    28 September 2021

    The Greater Western Sydney Giants finished seventh in 2021, with 11 wins, one draw and ten losses, although they are effectively sixth for the purpose of this exercise, due to their semi-final performances against higher-rated teams.

    They had six debutants, while only four players featured in every game: Callan Ward, Harry Himmelberg, Tim Taranto and Isaac Cumming.

    Here are the five players, plus an honourable mention, that the Giants could least afford to lose.

    Honourable mention: Sam Taylor
    Taylor featured in 17 of 22 games; in the five he missed through injury, the Giants won two, lost two and drew one.

    He averaged the fourth-most intercepts of any player in the AFL (8.47 per game) and the third-most contested marks of any Giants player.

    5. Josh Kelly
    Kelly was extremely consistent in 2021 and was rewarded with a new contract. He averaged the most metres gained at the club (454.17 per game), the second-most score involvements (5.96 per game) and the most tackles (5.61 per game).

    His versatility was a strength as he could play in the midfield or on the wing.

    Last but not least, he led by example.

    4. Jacob Hopper
    Hopper was named in the All Australian squad, had at least 21 disposals in every game he played and missed only one match, through injury.

    He averaged the third-most inside 50s of any GWS player (4.30 per game) and the most contested possessions (12.04 per game).

    3. Toby Greene
    Greene featured in 18 games, including the elimination final win over the Swans and was named in the All Australian forward pocket.

    He averaged the second-most score involvements of any player in the competition, with an average of 8.28, and kicked at least one goal in every game!

    (Photo by Michael Willson/AFL Photos via Getty Images)

    2. Lachie Whitfield
    Whitfield was unavailable for the opening six games, and the club lost five of the seven games he didn’t play. Along with that, in Round 17 he was subbed out after accumulating just three disposals and the team lost to the Suns by one point!

    He averaged the third-most metres gained (437.65 per game) and 4.53 score involvements per game – remarkable for a player who played predominantly on a half-back flank.

    1. Tim Taranto
    Taranto was a revelation, featuring in all 24 games that they played, averaging the most disposals and the most inside 50s at the club.

    He also averaged 5.42 score involvements per game and had the second-most tackles – with an average of 5.33 tackles per game – which shows he worked hard defensively.

  • Coaching great tipping sustained success for Melbourne following drought-breaking premiership
    28 September 2021

    Coaching great Leigh Matthews believes Melbourne is well set for sustained success in the coming years.

    On the back of Simon Goodwin’s side winning the premiership on Saturday night – Melbourne’s first in 57 years – Matthews says the club’s list profile and talent across the park mean they are as “well equipped as any premiership team in recent times” to continue their dominance for years to come.

    “If you’re talking about Melbourne, they have good big defenders and good medium smalls,” he said on Sportsday.

    “They have a fantastic one-two ruck combination (with Max Gawn and Luke Jackson) and they have (Clayton) Oliver, (Christian) Petracca, (Jack) Viney, (James) Harmes, (Angus) Brayshaw and (Ed) Langdon, that’s a fantastic midfielder group.

    “If you want to be honest, (Ben) Brown and (Tom) McDonald are OK forwards… (Bayley) Fritsch looks to be a pretty solid third tall so he’s very good.

    “Between them it’s pretty good. I think it’s reasonable to say that Melbourne is as well equipped as any premiership team in recent times to be a good team for a long time.”

    The Demons will depart Perth in the coming days, following a successful premiership campaign which saw them largely base themselves out west in the lead-up to the Grand Final.

  • “Bottom of the ladder for facilities”: Melbourne CEO speaks on plans for new club base
    28 September 2021

    Melbourne CEO Gary Pert believes the Demons are on the “bottom of the ladder for facilities” in the AFL.

    Multiple club departments are currently situated across different locations, and the Demons are looking to harmonise all operations under the one roof.

    The club has been working on a design for a new base near AAMI Park in Gosch’s Paddock, which is set to be funded by a joint venture between Melbourne, the AFL, and the Victorian Government.

    Speaking to SEN’s Dwayne’s World, Pert said the club’s dire situation in terms of training facilities is recognised by the AFL.

    “If (the state government) fast tracked it, I’d be more than happy, I’ve been working on it for three years now,” he said.

    “We’ve won the premiership this year and we’re acknowledged at a government level and by the AFL that we’re clearly on the bottom of the ladder for facilities.”

    “If I was to talk to anyone at the AFL and say, ‘I’ll meet you tomorrow at the Melbourne footy club,’ basically you wouldn’t know what I’m talking about because we’re in three, four, five different locations.”

    Melbourne’s poor facilities make the Demons’ triumph in Saturday’s AFL Grand Final “even more amazing,” according to Pert.

    “It’s started, we’re starting the redevelopment of our oval, we’ve had a junior-sized oval that we’ve been training on for the last 10 years and in the next few weeks that’s going to be resurfaced and enlarged to an MCG length and Marvel (stadium) width oval, so that’s a really big step for us,” he said.

    “Right at the moment, we don’t even have the facilities of a community club, which makes the performance of the players even more amazing.”

    The Demons will return home to Melbourne from Perth in the coming days after breaking the longest premiership drought in the AFL with their win over the Bulldogs.

  • Suns list boss trade update on top 10 draft trio, Brodie, Dunstan and more
    28 September 2021

    Gold Coast list boss Craig Cameron has provided an update on numerous dealings for the club during the upcoming AFL trade period, including a trio of young stars set to come out of contract in 2022.

    The Suns followed a familiar path in 2021, starting fairly well before fading in the back half of the season.

    However, important wins over Richmond and the GWS Giants late in the season, coupled with the likely return of numerous injured key players next season, has given the club hope for 2022.

    2018 top 10 draft picks Ben King, Jack Lukosius and Izak Rankine are all out of contract at the end of next season and will face the pull of clubs from their home states.

    When asked about the trio, Cameron told AFL Trade Radio’s The Late Trade that the Suns are confident the young guns are invested in the team’s success, and said he hoped they might be able to complete signings over the break.

    “We’d be hopeful we can do some signings this off-season before we get into next year,” he said.

    “Our young blokes are really invested, part of our strategy from the get-go was to bring a bunch of talented young guys together, and they’ve really bonded.

    “They’ve got good hope for the future, but on-field we’ve got to show it.”

    Cameron also commented on the futures of Will Brodie and Darcy Macpherson.

    The pair are in a similar boat: both are 23 years old but have struggled to cement their places in the Suns’ best 22.

    Brodie especially has struggled up north, playing just 24 games in five years after being drafted with pick nine in the 2016 AFL draft.

    “Will’s going into his sixth year next year, and he quite rightly wants to explore his options elsewhere and we’re happy to facilitate that if we can find something for him, we’ll work to that through the trade period,” Cameron said.

    “Darcy is a little bit the same, but he hasn’t been quite as vehement in talking to us around wanting to find another home, but if he did and found something that works for us, then we’d look at that.”

    The Suns have some work to do if they are to bring in any trade targets this off-season, with all list spots currently filled and signed for next season. However, that hasn’t stopped Cameron and his team from showing interest in delisted Saint Luke Dunstan.

    “We’ll have to wait until we get through the trade period to see where everything sits, but Luke’s a good player and he played some good games this season so we’d be crazy not to look at a player of his talent if he’s available to rookie list, it just depends what happens through the trade period as to how many rookie selections we have,” Cameron said.

    2022 shapes as a big year for the Suns, with numerous signings still to be completed and coach Stuart Dew out of contract.

  • Melbourne man under investigation for attending Perth AFL grand final allegedly in breach of COVID rules
    28 September 2021
    Police in Western Australia are investigating whether a Melbourne man who may have attended the AFL grand final on Saturday is in breach of a COVID-19 quarantine direction.
  • Dermott Brereton's top AFL commentators, experts and host
    28 September 2021

    If you could put together a commentary team made of all available media talent to call a game of footy, who would you select?

    Dermott Brereton has put together who he believes are the best in the business in terms of hosting a broadcast, calling the game and providing expert commentary.

    He has gone with one host, two callers and two experts.

    See Brereton’s call team below:

    Host: Eddie McGuire

    “Eddie McGuire’s the best host. I declare my interests: he’s a great mate, but really if anyone says they’re as good a host as Eddie, it’s chalk and cheese, he’s the best host we’ve got,” Brereton told SEN’s Bob and Andy.

    Callers: Anthony Hudson and Dwayne Russell

    “Anthony Hudson, and I love the way Dwayne Russell calls,” he said.


    Experts: Jimmy Bartel and Nick Riewoldt

    “I learn the most off Jimmy Bartel listening to him. Jimmy tells me the most of what I want to know when I’m watching a game off-screen that I can’t see and is able to tell me how things have happened in a certain way,” he said.

    “I’ve got him first, he’s clearly the best in my view. I can’t work out why Channel 7 don’t use him more.

    “I love seeing how Nick (Riewoldt) pulls it apart and shows where players are from, how they’ve got there and shows how things have transpired to get there.

    “They’re the two current best at telling me something that I want to know.”

  • Melbourne CEO reveals club's Adam Cerra trade ambitions
    28 September 2021

    Melbourne CEO Gary Pert has spoken on the speculation surrounding the Demons throwing their hat in the ring for Fremantle young gun Adam Cerra, who has requested a trade home to Victoria.

    Cerra still appears to be destined to nominate Carlton in the coming days; however, Melbourne has emerged as another club interested in the 21-year-old.

    Speaking on SEN’s Dwayne’s World, Pert said he believes the club’s list is in a great spot, featuring plenty of depth.

    “I’m part of all the conversations the list management group is having with all the players and all managers,” he said.

    “We’ve got a list that’s pretty strong, there’s going to be a bit of pressure on us from a salary cap point of view.

    “But I think not only have we got a talented list, but anyone who was looking at the players who ran onto the ground after the Grand Final, there were quite a few of them who deserved to be playing at the highest level.

    “We’ve got a highly talented group, but again we’ll explore all options.”

    It’s unclear how Pert and his list management team would be able to secure Cerra, considering the club has just one pick in the first two rounds at number 33.

    On the specifics of a deal for Cerra, Melbourne’s CEO confirmed they were interested in the star Docker but refused to elaborate on how it would happen.

    “All clubs are going to be talking to the representatives of a young star player like that,” Pert said.

    “Whether you can have the room or be able to pull off the deal, I’d say the majority of clubs are exploring it, but until we think anything’s going to happen with any player, we keep all those conversations highly confidential.”

    The Demons have an extremely young list, with seven of their 23 players on Saturday night aged 21 or younger.

  • Grand finals don't reward season's best team, AFLW star Ebony Marinoff says
    28 September 2021
    After a mixed record in do-or-die grand finals, Adelaide Crows AFLW star Ebony Marinoff says it is a shame that one "really bad day" should cost a team a title.
  • Silvagni explains AFL draft points system, how Dogs will land Sam Darcy
    28 September 2021

    Stephen Silvagni has explained the intricacies of the points system in the AFL draft and how the Western Bulldogs will be able to secure Sam Darcy in the 2021 draft.

    Silvagni has worked as a list manager at both GWS and Carlton and has drafted many players under the current, albeit sometimes confusing, rules of the draft.

    Darcy, son of Bulldogs legend Luke, is one of the most talented players of his draft class, the tall forward rocketing into pick-one calculations with a six-goal haul in Vic Metro’s under-19s trial match in June.

    As Darcy is a father-son prospect, the Bulldogs have the option to match any bid for the 18-year-old.

    Silvagni explained exactly how the Dogs will look to land their prized draftee.

    “They can go into points deficit, but they will try and find the picks,” Silvagni told SEN Breakfast.

    “Each pick has a value, if he goes at pick two, that pick’s worth 2,500 points, plus a 20% discount for father-son or NGAs (Next Generation Academies).

    “So, they’ve got to make up those points within that draft.

    “They’ve got to find 2000 points, so all their picks in the draft have to add up to 2000 points, and they all go (if they match the bid).”

    However, he says that’s not the be-all and end-all, using Fremantle as a past example.

    “If they don’t have enough points with those picks in this year’s draft, the points will come off their first pick in next year’s draft, that’s called going into deficit,” Silvagni said.

    “It’s happened to Fremantle. Fremantle, last year or the year before, there was a bid on one of their NGA players, and they were in deficit in their first-round, so their first pick actually slipped back a couple of spots last year.”
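
    In rough terms, the arithmetic Silvagni describes can be sketched in a few lines of code. The figures below are illustrative only: the roughly 2,500-point value of a pick-two bid and the 20 per cent father-son/NGA discount come from his example, while every other pick value, and the helper functions themselves, are made up for the sake of the sketch.

        # Illustrative sketch of the bid-matching arithmetic described above.
        # Only the ~2,500-point value of pick two and the 20% discount come from
        # Silvagni's example; the other pick values are invented placeholders.
        ILLUSTRATIVE_PICK_VALUES = {1: 3000, 2: 2500, 17: 1000, 35: 600, 52: 300, 70: 100}

        def points_needed(bid_pick, discount=0.20):
            """Points a club must find to match a bid, after the father-son/NGA discount."""
            return round(ILLUSTRATIVE_PICK_VALUES[bid_pick] * (1 - discount))

        def match_bid(bid_pick, own_picks):
            """Return (points the club's picks supply, deficit carried into next year's first pick)."""
            required = points_needed(bid_pick)
            available = sum(ILLUSTRATIVE_PICK_VALUES.get(p, 0) for p in own_picks)
            return available, max(0, required - available)

        # A bid at pick two (2,500 points) discounted by 20% needs 2,000 points.
        # A club holding picks 17, 35 and 52 can supply 1,900 points, so it would
        # carry a 100-point deficit into its first selection in next year's draft.
        print(points_needed(2))            # 2000
        print(match_bid(2, [17, 35, 52]))  # (1900, 100)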

    The Bulldogs’ first pick currently sits at number 17 in the 2021 AFL draft, meaning they will likely face bids from rival clubs on Darcy.

    The former Carlton list manager says clubs will bid if they think it can benefit them down the track.

    “Ultimately, by me bidding, (you have to ask yourself) ‘Is it going to help me out or is it going to help other picks come in for you, do I really value that player and are we a chance to get him?’” Silvagni said.

    “If you value the player, sure (bid), within reason.

    “I always said that if you value that player and it’s going to help you get that player or it's going to help you get something further down the line, then bid.”

    A club whose bid is matched on a player will then collect the draft picks and points the rival club used to match the original bid.

    Collingwood father-son prospect Nick Daicos and South Australian Jason Horne-Francis are the other two in contention to be the number one draft pick in 2021.

  • International Cup locked in for 2023
    28 September 2021
    Due to ongoing international border closures impacting travel to and from Australia, the AFL has announced that the next AFL International Cup is scheduled for 2023.

    The event was originally scheduled to be held on the Sunshine Coast in 2020 but was postponed until 2021, due to the ongoing Covid-19 pandemic.

    AFL Executive General Manager Game Development, Andrew Dillon, said it was important to provide certainty for international participants and stakeholders.

    “Given the challenges around the re-opening of borders, and the need for international teams and organisers to proceed with planning the International Cup, we have made the decision that 2023 is the ideal year in which to next host the event.”

    “The International Cup remains a key aspect of the game’s growth overseas and the scheduling of the event in 2023 returns us to our original three-year schedule. We know that it has been frustrating for teams to prepare for the event alongside Covid and we appreciate the patience shown by all of our international players.”

    New Zealand will field both a Men’s and a Women’s team for the first time in the competition’s history.
