Coronavirus – What We Know and What We Can Do

 

Coronavirus Considerations

 

By: Douglas Dluzen, PhD

 

 

It’s amazing to think that in the span of a few months the entire world has fallen under siege to a microscopic virus. The novel coronavirus, which causes the disease COVID-19, emerged in Wuhan, China and has since infected individuals in over 100 countries and in a majority of US states. As of March 11, 2020, COVID-19 had been confirmed in over 115,000 people, and the virus will likely spread to every continent, country, and state in the coming weeks.

Our ability in the United States and globally to slow and stop the spread of this virus will depend on local and regional preventative strategies and available healthcare resources. Below is a short consolidation of information about COVID-19 and considerations on how to best combat its spread.

 

    What We Know:

The novel coronavirus emerged in or near Wuhan, China sometime in late December of 2019. Dr. Ai Fen, Director of the Emergency Department of Wuhan Central Hospital, identified a patient with a SARS-like infection. If you recall, SARS was an epidemic scare in 2003 caused by a coronavirus related to the one behind COVID-19. Dr. Ai began working with her colleague Dr. Li Wenliang on similar cases in the hospital, while at the same time alerting authorities about a potential new outbreak threat. Both doctors were muzzled by the Chinese government to limit the scope of publicly available information, presumably to keep a panic from occurring or perhaps to outright deny the existence of the infection. Dr. Li was harassed by police and government authorities and tragically died on February 6th due to the coronavirus. We’ll come back to more recent scientific muzzling later on in this article.

We know that the virus behind COVID-19 is related to SARS, MERS, and other coronaviruses, some of which cause the mild common cold. However, it has distinct genetic and epidemiological characteristics that make it a novel, previously undetected virus. The US Centers for Disease Control and Prevention has very detailed information about the coronavirus family, as well as how COVID-19 spreads. The World Health Organization also has extensive information about COVID-19, and these two websites should be among the only places you go for information on the prevention, symptoms, and treatment of COVID-19.

This is extremely important, as disinformation and xenophobia have put a lot of unreliable information online about COVID-19 and how it spreads. It has become so problematic that the United Nations has called it an ‘infodemic’, and these false claims could contribute to the spread of the disease. Relevant, factual, and life-saving information will help contain this pandemic, and it’s important to familiarize yourself with the sources of the information you’re getting. By sticking with these trusted organizations and websites, you’re helping yourself and your community stay as healthy as possible.

We also know that the world wasn’t ready for an outbreak of this magnitude. Within a few short weeks, COVID-19 infected tens of thousands of people in China, and, as mentioned before, it has now spread almost everywhere. Italy is on complete lockdown and Germany is preparing for infection rates as high as 70%. While these countries have formidable healthcare systems, the sheer number of potentially infected people will drive how this pandemic really plays out, particularly if countries try to avoid severe curfews, city lockdowns, and transportation bans.

 

What We Can Do:

  1. #Flattenthecurve

Julie McMurry, MPH, has started an online database of resources that examine the stress of this pandemic on the healthcare system (you can even sign up for emails to alert you when new information becomes available). The nuts and bolts of the ‘Flatten the Curve’ movement are that anything one can do to slow the spread of the virus, within the US or abroad, will lead to less pressure on each country’s healthcare system.

The “curve” refers to the number of cases over time in a given region relative to the capacity of its hospitals. An influx of patients that goes beyond the infrastructural capabilities of a given hospital or medical system increases the risk of patient mortality, increases the spread of the infection to healthcare professionals (who will undoubtedly suffer both illness and psychological turmoil during this), and leads to other unforeseen complications. Simple activities such as washing your hands often with soap and water, limiting yourself to essential travel, and avoiding large crowds for the coming months are little things everyone can do to help flatten the curve. This is critical, as reports from Italy’s frontlines are grim (read Dr. Silvia Stringhini’s thread and related threads on this), and it’s the most severe cases that will require prolonged hospital stays with important (and limited) resource expenditures. On top of this, little has been done to test the infrastructure of many developing countries that are just beginning to see the infection spread, which will undoubtedly burden the global response.
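To make the idea concrete, here is a minimal sketch, not from the original article, of a textbook SIR (susceptible-infected-recovered) epidemic model in Python. All parameter values are illustrative assumptions rather than real COVID-19 estimates, but the comparison shows the basic point: cutting the transmission rate lowers the peak number of people who are sick at the same time, which is exactly the load that a fixed supply of hospital beds has to absorb.

```python
# Minimal SIR sketch with made-up, illustrative parameters (not real COVID-19 values).
# The point: a lower transmission rate (beta) produces a much lower peak of infections.

def sir_peak(beta, gamma=0.1, population=1_000_000, days=600):
    """Simulate a simple SIR model day by day and return the peak number infected."""
    s, i, r = population - 1.0, 1.0, 0.0
    peak = i
    for _ in range(days):
        new_infections = beta * s * i / population  # new cases today
        new_recoveries = gamma * i                  # people who recover today
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return peak

print(f"Peak simultaneously infected, no mitigation:      {sir_peak(beta=0.30):,.0f}")
print(f"Peak simultaneously infected, distancing measures: {sir_peak(beta=0.15):,.0f}")
```

With these assumed numbers, halving the transmission rate cuts the peak of simultaneous infections several-fold, so far fewer people need a hospital bed at the same moment. That reduction in peak demand is what “flattening the curve” means.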

Colleges and universities around the United States have now begun the arduous task of restructuring higher education in real time. Major universities have closed down in-person classes and are in the process of moving instruction online. My school, Morgan State University in Baltimore, has followed suit, and this is all about flattening the curve to limit viral spread. It’s likely that major sporting events, concerts, parades, rallies, science fiction and fantasy conventions, and other large gatherings will be postponed or cancelled. These cancellations SHOULD happen, as painful as the coming months will be. They will help limit the spread of the virus and ultimately flatten the curve.

 

  2. Know the Symptoms

I won’t go into great detail here, other than to say that COVID-19 symptoms can easily be confused with those of the common cold and flu. Below is a helpful guide, compiled from CDC and WHO resources, on spotting the subtle differences among the three; the CDC’s symptom guidelines are also worth checking. But remember, as this virus is still spreading, new information may become available, and it’s best to check in often, particularly if you’re worried you may have become infected. Knowing the symptoms will also help you and the healthcare system by keeping cases of the common cold out of the clinic so resources can be directed to COVID-19 patients.

 

 

Right now, COVID-19 has a much higher mortality rate than the flu, and arguments that the virus isn’t deadly or serious are unjustified. The measured rate may fall as testing expands and more mild infections are counted, but right now the mortality rate is at least an order of magnitude higher than that of the seasonal flu. Knowing and appreciating this helps us all flatten the curve.

 

  3. Follow the Lead

Some countries have been very effective in limiting the spread of the virus. South Korea stumbled early but now seems to have a grip on the outbreak. Taiwan is a great example of a country that dealt with prior epidemics (such as SARS) and put healthcare and leadership infrastructure in place to ensure a proactive and rapid response. Cases in Taiwan remain limited in spite of its connections and proximity to mainland China. Effective communication and crowd management were key elements of Taiwan’s containment success, things the United States and other large democracies are only now doing belatedly.

The United States certainly fumbled the ball early with respect to appointing appropriate leadership, establishing new funding mechanisms, installing effective screening procedures, and reliably communicating the seriousness of this disease in order to limit transmission. Messages from the Administration have been muddled at best and at times downright confusing. As well, a mistake at the CDC caused an early shortage of COVID-19 testing kits that has only recently been addressed, meaning that ramped-up production of testing kits is just now beginning.

There has also been speculation that scientists at the CDC, the National Institutes of Health (NIH), and elsewhere have been muzzled with respect to reporting results and findings about the outbreak. All messaging must now go through Vice President Pence’s office first (despite his questionable record in handling an HIV outbreak in his home state of Indiana). The good news is that there appears to be one scientist above any political messaging related to the coronavirus: Dr. Anthony Fauci, Director of the National Institute of Allergy and Infectious Diseases.

If there’s one person to listen to during this entire crisis, it’s Dr. Fauci. He’s arguably one of the most important infectious disease doctors in the world and has the knowledge and resources to implement essential preventative and therapeutic interventions. He has the respect of the research and scientific community and the gravitas to give clear messaging about the coronavirus, even in some cases correcting the President during Cabinet meetings and news conferences. This is a positive sign that our doctors and scientists are capable of doing their work to help develop new vaccines, treat patients, and spread the word on effective practices to flatten the curve. It is worth noting that Dr. Fauci is also telling Congress and TV audiences that things will get worse in the US before they get better, so we must be prepared. However, as of this writing, the White House has ordered that some top-level COVID-19 meetings with government health officials be treated as classified, limiting both what can be released from those discussions and who can take part in them. It remains to be seen how this will impact the US response to the virus, as key personnel have apparently been left out of important meetings since January.

In summary, while COVID-19 is a dangerous and deadly disease, we can come together as a society to combat it. We can buy time for our scientists to develop a vaccine, we can slow the infection rate to give doctors the resources they need to treat patients, and we can be deliberate about which information we choose to listen to and which to ignore. Hopefully we can all do our part to #Flattenthecurve and stay informed, stay responsible and calm, and stay healthy.

Douglas Dluzen, PhD, is an Assistant Professor of Biology at Morgan State University in Baltimore, MD. He is a geneticist and has studied the genetic contributors to aging, cancer, hypertension, and other age-related diseases. Currently, he studies the biology of health disparities and the microbiome in Baltimore City. He teaches evolution, genetics, and scientific thinking and you can find more about him on Twitter @ripplesintime24. He loves to write about science and society in his science fiction.

Epigenetics and Environmental Influence

Epigenetics is the new dogma of genetic research. If there’s a gene involved with a disease, or a genetic variant associated with a particular trait, you can bet that there will be some epigenetic mechanism that also contributes to that disease or trait.

Epigenetics is the study of the additional layers of control over how the genome is structured and regulated, including when and where a gene is expressed. It means looking beyond the A’s, T’s, G’s, and C’s to the chemical modifications that can influence how the primary gene sequence is read.

While investigating how the chemistry of our genome shapes our biology, we’ve come to understand that our environmental exposures also have a major say in the epigenetic control of our genes. Below is an article that first appeared in Undark Magazine, reprinted with permission, about how a chemical contamination accident in Michigan may have lasting, generational consequences in the genomes of those caught in the aftermath. It’s called “Uncertain Inheritance: Epigenetics and the Poisoning of Michigan”.

Uncertain Inheritance: Epigenetics and the Poisoning of Michigan

In 1973, a toxic chemical was mixed into tons of farm feed, sickening livestock and exposing millions of Michiganders. Should later generations worry?

December 18, 2017 by Carrie Arnold

Jim and Ida Hall buried their daughter Jerra in a family plot at the bottom of a grassy rise. Several times a year, Jim Hall drives just over a mile from his home on North Main Street in the town of St. Louis, Michigan to Jerra’s headstone in the back corner of Oak Grove Cemetery in his 1997 Chevy pickup. In the 12 years since complications from a rare heart defect claimed the life of their brown-haired toddler, her family continues to cover her grave with stuffed animals (frogs were her favorite). Hall gently sweeps off the leaves and debris covering the childhood paraphernalia and wipes his callused hands on a pair of worn jeans, his tall frame stooped by grief. He stops and stares at the inscription: “Two years, two months, too little.”

“We didn’t know what else to write,” he said.

Jerra’s headstone sits where an umbrella of majestic oaks gives way to the dreadlocks of vines and grasses of a small wetland in the geographic center of Michigan’s Lower Peninsula, a little more than a mile from the chemical plant that once produced a toxic flame retardant called PBB, short for polybrominated biphenyl. Hall can’t help but think it may have killed his little girl.

“When your daughter is born with a heart condition and doesn’t survive,” he said, “you just wonder.”

The problem: Baby Jerra herself was never exposed to PBB. In fact, she was born some 30 years after the Michigan Chemical Corporation, headquartered in St. Louis, accidentally mixed several hundred pounds of PBB into livestock feed and subsequently stopped producing the chemical. Until a few years ago, a granite tombstone marked the shuttered site.

Still, in the early 1970s, virtually all of the Lower Peninsula’s 8.5 million residents consumed meat, milk, and eggs contaminated with PBB before the mistake was discovered. The Michigan Department of Agriculture called it “the most costly and disastrous accidental contamination ever to occur in United States agriculture.” Hall, who had just turned 11 in 1974 when the mix-up was uncovered, was among those residents — and because he lived just a few blocks from the plant, he likely got an extra dose. His grandparents’ home, where he often stayed, was even closer: just three houses away and directly downwind.

The exposure almost certainly caused chemical changes to his genes, which affected not the sequence of letters in his genome, but rather how those genes are switched on and off through epigenetic information that sits on top of DNA. Recent animal studies have suggested that epigenetic changes could be passed down through generations, potentially causing inherited health problems in children, grandchildren, and great-grandchildren.



And yet, the science behind transgenerational epigenetic inheritance, as such a transfer of maladies would be known, remains highly controversial, and researchers have never been able to prove its existence in humans. That’s not for want of trying, however, and Michele Marcus, an epidemiologist at Emory University, believes the PBB mix-up in Michigan could prove crucial to that effort. Her preliminary work investigating this macabre, natural experiment has already suggested that the children of exposed mothers have many health problems possibly linked to PBB exposure. But the real question mark remains men like Hall, who were exposed to the chemical as children, grew up and married women who weren’t, and then had children themselves.

Could those offspring — who were never exposed to PBB in the 1970s, and who were never exposed to even residual PBB while developing in the wombs of exposed mothers — nonetheless experience health impacts linked to the chemical? It’s a question with profound implications.

“If you look at it through the lens of environmental justice, it changes the debate,” Marcus said, “because it’s not just, ‘Okay. I can make a decision to work in an industry that might be hazardous to my health,’ but ‘Am I making that choice for my grandchildren?’” There are people who don’t have a choice about being exposed, she added.

At the time of the incident, little was known about the biological effects of PBB exposure, and even today, only a handful of the 84,000 chemicals currently approved for use in manufacturing and industry have ever undergone real regulatory scrutiny. Even in the small subset of chemicals that have, government toxicologists mainly check for acute health effects on human populations, and while there are some testing procedures for chronic exposures, the potential impacts three generations on is not even a consideration. Now, with President Donald Trump’s proposed budget cuts at the Environmental Protection Agency, the chemical industry will likely enjoy only more freedom, and less scrutiny — a prospect that a growing number of scientists worry could reverberate across generations in ways that we are only at the earliest stages of understanding.

For researchers like Marcus, this makes grappling with the potential echoes of Michigan’s PBB exposure all the more urgent — and not least because kindred chemicals remain in wide circulation. Although PBB itself fell out of use in the years following the Michigan incident, other brominated flame retardants, including polybrominated diphenyl ether (PBDE), are ubiquitous in everything from electronics and vehicles to furniture and textiles. Both PBB and PBDE are persistent organic pollutants, a large class of compounds that includes chemicals like DDT, polychlorinated biphenyls (PCBs), and dioxins that resist being broken down in the body and the environment. And if Marcus finds that PBB can affect multiple generations — even those with no direct exposure to the chemical — “it’s probably going to be true for a lot of other chemicals,” she said.

If Marcus is right, it could upend not just medicine, but whole strata of legal, regulatory, and even ethical bedrock. Could insurers refuse coverage to the great-grandchildren of people exposed to a chemical toxin? Where would liabilities end? What are the real-world implications of personal health problems linked not to some chemical exposure that unfolded in our lifetimes, but at some distant point in our past lineage? “It changes the rules,” said Joe Nadeau, a geneticist at the Pacific Northwest Research Institute, a nonprofit biomedical research facility based in Seattle. “It’s one thing to have the direct exposure. We all know that, we all worry about it,” Nadeau said. But if the consequences of those environmental exposures can be delivered to our children or grandchildren, then we don’t just have ourselves to consider. “We have a responsibility to do better,” he said.

Hall feels that responsibility more deeply than most. When his daughter was sick, doctors told him that her heart defect was just dumb luck, and Hall had no reason to search for alternate explanations. But when he joined a community group working to clean up the contaminated Michigan Chemical site, Hall began to connect the dots between Jerra’s illness and all the PBB he had inhaled and ingested as a child.

“I hope that we can get research on [those of] us that have been affected,” he said, “so that this can be figured out.”

That the legacy of PBB exposure hangs like a dark cloud over generations of families — and over vast areas of rural Michigan — is a profound understatement. The stories vary in their details, but most begin with a slow awakening to the idea that something wasn’t right. Maybe the cows were acting funny, or the chickens seemed off. And then, as livestock began to die, the awakening tended to grow into a certainty that was typically met, at least at the outset, with patronizing dismissal by authorities.

That’s how it went down for dairy farmer Frederic Halbert. Dawn broke with hesitation on Thursday September 20, 1973 through overcast skies as Halbert traveled the five miles from the family’s farmhouse to the milking parlor. As the first group of black-and-white cows ambled in, they seemed lethargic, and they didn’t walk so much as stagger unevenly across the parlor.

Despite weighing in at 1,500 pounds, Holsteins have delicate stomachs. Since the quality of a cow’s diet directly impacts the quality and quantity of milk it produces, Halbert paid close attention to what his cows ate. It’s why he ordered 65 tons of a new cattle feed — called Dairy Ration 402 — from the Michigan Farm Bureau between the end of August and the first week of October 1973. Halbert, who had worked as a chemical engineer before returning to dairy farming, had grown interested in the new cow chow — a mix of grains and vitamins — because it contained small, silvery pellets of magnesium oxide to improve digestion and boost milk production. Halbert used DR402 in the fall and winter to supplement the farm’s home-grown fodder. Not long after, the cows stopped eating and grew listless in the spacious, open-air barn. Milk production plummeted. Halbert’s vet ran countless tests and submitted samples and entire animals for analysis by the state, none of which provided any concrete answers.

“My dad’s herd was his life,” said Lisa Halbert, Frederic’s daughter, who was only a toddler at the time. “He knew something was wrong, and he felt that no one was giving him any answers.” (Both journalist Joyce Egginton and Halbert himself would chronicle his long search for answers in books, “The Poisoning of Michigan” and “Bitter Harvest.”)

Across the state, dairy farmers were noticing similar behaviors in their herds. Cows began falling ill. But with no diagnosis — no infectious disease or concrete factor to blame — veterinarians and the Farm Bureau suggested the illnesses may have been caused by fungus-contaminated feed from a cold, wet summer, or by bad animal husbandry. Halbert knew that wasn’t the case, and after months of testing turned up nothing, he turned to DR402 as the culprit, since it was the only recent change to his cows’ diets.

He sent samples to labs around the country, including to the U.S. Department of Agriculture’s National Animal Disease Center in Ames, Iowa. In January 1974, their tests revealed an unknown chemical contaminant: an unusually heavy compound that no one could discern. Analysis from the Wisconsin Alumni Research Foundation (WARF) also showed that DR402 from Halbert’s farm contained very little of the magnesium oxide that had been listed among the feed’s ingredients. When the Ames lab ran out of funding to identify the contaminant in DR402, feed was also sent to USDA scientist George Fries in Maryland, who identified the mystery chemical as a type of polybrominated biphenyl (PBB). Halbert immediately realized what had happened, since Michigan Chemical made both magnesium oxide and PBB.

Instead of shipping magnesium oxide to the Farm Bureau, Michigan Chemical had accidentally shipped several thousand pounds of a flame retardant marketed under the brand name Firemaster. To the naked eye, the Firemaster pellets looked for all the world like harmless magnesium oxide mixed in with the DR402 feed — but it wasn’t harmless.

PBBs are dumbbell-shaped compounds consisting of two attached carbon rings (the phenyl groups) with varying numbers of bromine atoms linked to either end. When added to children’s clothing, upholstery fabric, or electronics, they interfere with the item’s ability to catch fire, making them one of the first commercial flame retardants and providing Michigan Chemical with a veritable goldmine. Still, an internal memo revealed that corporate chemists had expressed safety concerns if PBBs were ingested or inhaled. In 1968, people in Japan ate rice bran oil contaminated with chemically similar compounds including polychlorinated biphenyls (PCBs). After 14,000 people in Fukuoka Prefecture reported difficulty breathing, skin rash, weakness, and skin discoloration, public health officials discovered PCBs flowing through coils used to heat the oil.

PBBs, PCBs, and related chemicals are stored in body fat and not readily excreted, which gives them decades to cause health effects. Because PBBs were never meant for the food supply of humans or animals, however — and given the industrial bravado of the 1970s — Firemaster was marketed as safe. When Michigan Chemical learned that the PBB issue had been traced back to their plant, they disposed of PBBs in the county landfill and in a nearby burn pit that now sits on a golf course. Both are now designated as Superfund sites. The company stopped producing PBBs for good on November 20, 1974.

Six months before that, the Michigan Department of Agriculture began quarantining dairy farms whose milk products were found to have PBB levels greater than 1 part per million. By November, that level had been lowered to 0.3 parts per million. Although the state did not mandate that farmers have their quarantined animals killed (supposedly to prevent them from taking legal action), farmers felt they had little option but to allow the state to collect their animals, truck them to the northern tip of the Lower Peninsula in Kalkaska, shoot them, and bury them in mass graves.

Other livestock, including chicken (and their eggs) and pigs, were implicated in the feed contamination, multiplying the exposures — and the animal executions — across the state. The exposures included not just farm families, but consumers who ate contaminated beef or chicken or pork, or who consumed contaminated milk, cheese, or eggs, as well as workers who produced the chemical in factories — and their spouses and offspring.

Direct exposure to PBBs, whether through ingestion, skin absorption, or inhalation, has been linked to an increased risk for a variety of ailments, from short-term skin rashes, hair loss, and muscle and joint issues, to longer-term thyroid problems and changes in hormone levels.

Lisa Halbert is circumspect about her own health issues as an adult and the PBBs that her father unwittingly imported to their dairy cow operation when she was a child. She’s blonde and broad-shouldered, and she greeted me in knee-high rubber boots and a forest green Lorax shirt that she wore to the March for Science in Washington earlier this year. Now in her mid-40s, Halbert had taken a medical leave from her work as a dairy veterinarian to recover from hip replacement surgery and a hysterectomy.

“I got spayed,” she quipped.

Her memories of the PBB disaster remain vivid. When she learned as a child that her favorite cow, named Flopsy, was to be included in the state’s quarantine-and-slaughter action, she says she and her sisters discussed ways to save her, including sneaking her into the woods and then borrowing a truck to move her somewhere safe. But the oldest Halbert sister was a mere seven years old, and Flopsy was quickly collected and dispatched to Kalkaska.

The family eventually removed the feeding station where their doomed cows ate the contaminated DR402. Today the parlor still stands, but is not used to collect any milk sold for human consumption. Lisa Halbert recalls the family bulldozed one of the barns used to house cows during the quarantine phase, but many of the other outbuildings still stand.

The elder Halbert, meanwhile, is now a terse, moon-faced man in his 70s. He lives about nine miles from the dairy, situated just north of Battle Creek, and a tour of the grounds today reveals no signs, overt or covert, of the contamination that forever altered the family.


By comparison to these sorts of experiences, Jim Hall and his family were far removed from the PBB drama as it was initially unfolding. And because he didn’t live on a contaminated farm or work in the factory, epidemiologists at the Michigan Department of Community Health, which established the Michigan PBB registry in 1976 to gather and analyze data on exposed residents, didn’t recruit Hall to be part of their study cohort.

Still, Hall’s brother developed Hodgkin’s lymphoma in the early 1980s, not far from a cluster of cancer cases around the town of Breckenridge, five miles east of St. Louis. In 2008, five years after the birth of his daughter — and three years after the family buried her — Hall found a lump in his throat while driving. When he went to the doctor complaining of fatigue and difficulty swallowing, he found out that precancerous nodules adorned his thyroid gland like ornaments on a Christmas tree. Doctors removed the gland.

Initial data gathered from the PBB cohort study revealed other health problems among the hundreds of families and thousands of individuals recruited. Like other infamous toxicants such as bisphenol A and dioxin, PBB is classified as an endocrine disruptor, meaning that it interferes with the body’s array of natural hormones. Growing concern about endocrine disruptors since the 1990s has spurred an avalanche of research linking these chemicals to thyroid problems, diabetes, obesity, fertility problems, changes in pubertal development, and hormone-sensitive cancers such as breast and prostate cancer.

“We can detect a very large number of chemicals in essentially everybody’s bodies,” said Jonathan Chevrier, an environmental health scientist at McGill University. “Chemicals that we have reason to believe may interfere with hormones, for instance. That’s complete news to most people.”

Studies launched from the Michigan Department of Community Health’s long-term monitoring of affected residents revealed a variety of findings. Some research has shown an elevated prevalence of liver, neurological, and immunological problems among Michigan farm families, but these issues have not been consistently linked to PBB blood levels. A 2011 study by Marcus and others at Emory University found that women exposed to PBB in utero were more likely to experience miscarriages. A few other early studies — all on small groups of children — suggested that those exposed to PBBs were more likely to have developmental delays. But for all this work, in 2011, the study hit a crisis point. The state had run out of funding and, with the disaster increasingly confined to the dusty archives of history, it seemed like little more could be learned.

Michele Marcus, though, disagreed. Trained as a reproductive and environmental epidemiologist, Marcus had worked alongside Irving Selikoff at the Mount Sinai School of Medicine in New York, years after he conducted some of the first investigations into the human health effects of PBB. As a young professor at Emory University working with the CDC in the mid-1990s, she began investigating how PBB exposure continued to affect health after 20 years. As someone interested in how early life experiences shaped a person’s long-term health, Marcus knew that the decades-long study provided an invaluable set of data on the long-term health impacts of a chemical exposure. She eventually took over the research and the Michigan PBB Registry now resides at the Rollins School of Public Health at Emory University.

“Because I knew about the PBB incident, I thought: Wow, these people were exposed to a chemical we think is an endocrine disruptor. Maybe they might be experiencing these problems, and this would be a much stronger type of evidence, because we could measure individual exposure levels and look at the relationship to individual health outcomes. It’s much stronger evidence than just looking at trends over time, because over time, lots of things are changing,” Marcus said.

Given her training, Marcus was also well aware of the sensitivity of developing organisms. In 1995, she proposed to study the health issues affecting daughters of women who had been exposed to PBB. But it wasn’t until 2008 that her participation in a meeting in Washington, D.C. for the Institute of Medicine (now the National Academy of Medicine) introduced her to the idea that a father’s exposure to PBB could have an effect on his children. No one had verified the idea in humans, but Marcus’s PBB cohort, she was convinced, provided the perfect case study.

To understand why, it helps to cast back nine years before that 2008 meeting, to the spring of 1999. It was then that Michael Skinner, a reproductive biologist at Washington State University in Pullman, recalls one of his postdocs, Andrea Cupp, bursting through his office door. Skinner and Cupp had been working to understand the process of how a fetus became male or female, and injected pregnant rats with a variety of hormone mimics, including methoxychlor (a pesticide) and vinclozolin (a fungicide), to study how the process was disrupted. The resulting adult males had low sperm counts and reduced fertility, but these were not the type of groundbreaking results either scientist was looking for. Cupp’s next task was to breed the males exposed to vinclozolin in the womb (the F0 generation) to check for effects in their offspring — rats that hadn’t been directly exposed to the chemical.

As it would turn out, however, Cupp had accidentally bred the children of the F1 rats by mistake, creating what would be considered an F2 generation. Visibly upset, she had come to Skinner’s office to explain the mix-up. He told her to analyze the rats anyway, figuring that the task would help take her mind off the error. Several weeks later, Cupp hustled into Skinner’s lab again, this time brimming with excitement. The testes on the F2 rats, she had discovered, looked exactly like those on the F1.

“Of course, I didn’t believe her and I made her go back and repeat it 15 times,” Skinner now recalls. But no matter how many times Cupp bred the rats, out to F4, 90 percent of the male offspring had reduced sperm counts and lowered sperm motility — even though they themselves had never been exposed to vinclozolin. “Then I knew that we had stumbled on something that was important,” he said.

Cupp and Skinner now had to figure out how this was happening. The high rate of testicular abnormalities ruled out direct DNA mutations, which were more random and happened at a lower frequency. That left the duo with a type of genetic change that decades of scientific dogma told them wasn’t possible. British biologist Conrad Waddington coined the term epigenetics, and a classic experiment of his involving fruit flies and heat illustrates the idea. A horizontal vein bisects the minuscule wings on the poppy seed-sized insect, controlled by a gene named crossveinless. Waddington made this vein disappear in newly bred generations of fruit flies by repeatedly heating the pupae. In breeding those flies that lacked the bisecting vein, he discovered that their offspring had the same trait, despite never being exposed to heat themselves. His analysis didn’t reveal any mutations in the genetic sequence itself, so he called it an “epigenetic” phenomenon — a way of describing genetic changes occurring above and beyond the ordering of genes.

After the discovery of DNA’s double helix, biologists had already shown that small chemical tags called methyl groups could act as epigenetic markers on the genome — punctuation marks, in a sense, on the cascading paragraphs of A-C-G-T that make up our genetic code. Another analogy used by researchers at Harvard Medical School: “If DNA is like a book, epigenetic marks are like sticky notes. Epigenetic marks tell our cells whether and how to read the genes.”



Carrie Breton, an epigeneticist and environmental health scientist at the University of Southern California, adds the key insight that folks like Skinner and Cupp and Waddington were uncovering: “These marks are fluid, not static,” she told me. “It’s yet another layer of complexity on top of the genetic code.”

That complexity came as something of a surprise. Work by developmental biologists in the 1980s had long suggested that newly fertilized embryos appeared to strip all the methyl groups off their genomes, essentially creating a blank slate from which to work. By this understanding, the multigenerational epigenetic inheritance observed by Cupp and Skinner was technically impossible. In the intervening quarter century, however, it had become more and more evident that an embryo’s epigenetic slate wasn’t nearly as blank as researchers thought.

Angelman syndrome, characterized by a small head, seizures, and developmental disabilities, for example, is caused by a deletion in chromosome 15 in the mother. The exact same mutation inherited from the father causes Prader-Willi syndrome, which causes an insatiable appetite. (Both genetic disorders can also be caused if a child inherits two copies of a section of the chromosome from the father and mother, respectively, rather than one from each.) Some type of marker had to exist that distinguished the parent from which the mutation arose. To date, scientists have discovered more than 100 different imprinted conditions in humans. This type of genomic imprinting provided the first indication that epigenetic markers could be inherited.

Skinner’s study provided more conclusive evidence, and when he and Cupp finally published their work in Science in 2005, the study made an immediate splash. Skinner’s phone rang off the hook with calls from reporters from around the world. Six months later, scientific and industry pushback began. Critics pointed out that Skinner injected the animals with vinclozolin rather than lacing their food. Since humans were exposed to the fungicide by ingestion or inhalation, not injection, the harmful effects seen in the rats might not reflect what happens in humans. A postdoc found guilty of scientific misconduct in 2010 only fueled the fire.

Still, when other labs conducted related experiments with different chemicals, they found broadly similar results. “I think it sank in that what I was telling them was going to affect lots of different things,” Skinner said.

“This changes how you think about toxicology,” he later added. “The whole system changes.”

Skinner began testing other environmental compounds, including DDT, bisphenol A, and dioxin (including Agent Orange), for their ability to induce multigenerational epigenetic changes in rats. In almost every case, he found them. Importantly, each chemical created its own unique epigenetic fingerprint on the rat’s DNA, giving scientists the opportunity to potentially investigate what specific compound an individual was dosed with.

His work in this area placed him in high demand at universities and scientific conferences. In 2008, he spoke at the Institute of Medicine meeting in Washington, D.C., where Michele Marcus first learned of his work. As part of a committee looking into the health effects of Agent Orange exposure among Vietnam veterans, Marcus attended as an expert in the long-term health effects of early life environmental exposures, and as Skinner talked about his research, she immediately began thinking about the PBBs in Michigan.

By that time, Marcus had already tracked a series of health issues in the children of women exposed to PBB. In one study, she and her co-authors found that daughters who were breastfed and exposed to high levels of PBB in the womb started their first periods an average of one year earlier than their counterparts who were exposed to lower levels. They were much more likely to have difficulties carrying a pregnancy to term. Sons were more likely to have urogenital birth defects. Both genders showed a high prevalence of thyroid problems associated with PBB exposure. Overall, six in ten Michiganders in the 2000s had PBB blood levels that were higher than 95 percent of the non-exposed population.

But poor health outcomes in the offspring of Michiganders first exposed to PBBs — if those outcomes were related to the initial contamination at all — would have been the result of direct exposures, too. To show multigenerational effects, Marcus would need to find grandchildren of those exposed to PBB in the 1970s.

As soon as Marcus heard Skinner’s talk on dioxin epigenetics in rats, however, she began to wonder whether epigenetics could also be playing a role in causing poor health outcomes down the generational lines of these Michigan families. Could the children and grandchildren and great-grandchildren of these exposed populations manifest with PBB-related health issues even though they themselves had never come in contact with the chemical?

Marcus teamed up with Alicia Smith, a geneticist at Emory University, to begin investigating whether PBB affected an individual’s epigenetic regulation, and whether those changes could be delivered across generations. After repeatedly being asked in community meetings about the effects a father’s exposure could have on his offspring, she and Smith secured a grant from the NIH and headed up to Michigan to start collecting fresh blood samples.


Marcus wasn’t the only one paying close attention to Skinner’s work in those years. Hearing him speak at an earlier National Academy of Sciences workshop, bioethicists Mark Rothstein of the University of Louisville and Gary Marchant of Arizona State University both say they immediately grasped the immense implications of this line of research. Rothstein recalled asking an industry representative in attendance about the potential legal impacts of epigenetic inheritance, and whether chemical manufacturers could one day be held liable for impacts on individuals removed from exposures not just by years, but by whole generations.

“All the blood seemed to drain from his face when I asked that question,” Rothstein said, “because now I’m suggesting that there’s kind of unlimited liability for manufacturers to as yet unborn generations.”

After the meeting, Marchant and Rothstein delved through the scientific literature to find other writing on the bioethics of epigenetics. When they couldn’t find anything, they decided to write one of their own. Their resulting paper, co-authored with Marchant’s student Yu Cai and published in Health Matrix: Journal of Law-Medicine in the winter of 2009, proposed more questions than it answered. Were epigenetic test results protected under the Genetic Information Nondiscrimination Act? (Probably not). How did multigenerational epigenetic inheritance affect the environmental justice movement? How could scientists use these findings without stepping over the line into eugenics?

The question that loomed over everything, however, was how this issue would be handled by the courts. To try and answer that issue, legal experts have often turned to another toxic chemical that seemed to produce multigenerational effects that, while not epigenetic in nature, might provide a model for tracing liability in epigenetic cases.

In 1966, pathologist Robert Scully asked 35-year-old Arthur Herbst, then a gynecological oncologist at Massachusetts General Hospital, about some bizarre tumors he had seen. A handful of girls, ranging in age from 15 to 22, had been diagnosed with a rare type of vaginal cancer called clear-cell adenocarcinoma that had, until this point, only been identified in post-menopausal women at MGH. Scully, Herbst, and some of their colleagues assembled these cases into a formal study to figure out what was going on. The answer came not from some dusty medical journal but rather a question from one of the girls’ mothers. She asked Herbst if the cancer could be associated with the DES she took while pregnant.

DES is short for diethylstilbestrol, a synthetic form of estrogen first developed by British chemists in 1938. Three years later, the FDA approved it for the treatment of “estrogen-deficient conditions” in both humans and livestock. After a small animal study hinted that low estrogen might cause miscarriages, obstetricians began prescribing it to patients with high-risk pregnancies. Marketing campaigns by manufacturers like Wyeth and Eli Lilly and Company touted DES as a new wonder drug, and women with normal, healthy pregnancies soon began taking it. A 1953 study suggested that DES didn’t actually prevent miscarriage, but it seemed like such a benign drug that doctors continued to prescribe it and women continued taking it into the 1970s.

Many of the studies that look at DES ask daughters if their mother took the drug. “Half the time the mom didn’t even know what she was given,” said Linda Titus, a cancer researcher at Dartmouth College who is heading up much of the research. “That was back in an era, roughly 1940 to 1970, when women didn’t ask too many questions.”

When Herbst investigated the mother’s question, he found that a small group of daughters of women who took DES while pregnant were astronomically more likely to develop clear-cell adenocarcinoma. The resulting paper, published on April 22, 1971 in the New England Journal of Medicine with the bland title “Adenocarcinoma of the Vagina,” became a landmark in the scientific literature.

“This is the first time a drug given to the mother was associated with the subsequent development of the malignancy in her offspring, in humans,” said Herbst, now in his mid-80s, who continues his work on DES at the University of Chicago. “It’s clear that the fetus doesn’t react like a mature adult. The growing embryo and fetus are sensitive to things that sometimes we don’t know about.”

Further studies revealed an increased risk of urogenital abnormalities in both the sons and daughters of women who took DES, as well as an increased risk of breast cancer in the DES mothers. To win a legal judgement, however, the plaintiffs needed to prove that DES caused their health problems. Apart from clear-cell adenocarcinoma, many of the harms reported by DES mothers and their children have a range of causes.

Epidemiological studies, according to Titus, have been able to show a strong association between prenatal exposure to DES and certain negative health outcomes. This work allows researchers to infer causation, but it generally cannot prove the exact cause of a disease in a particular individual. Titus points out that because some DES-related conditions are so rare and so strongly associated with prenatal exposure to the drug, in some cases epidemiologists can be “reasonably confident” of the link. To hold up from a legal standpoint, the courts ruled that epidemiological studies must show that exposure to a toxin more than doubles the risk of the resulting health problem — and that’s no easy task.

There are few “relative risks higher than 1.5 in many disease areas,” Titus said.
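As an aside not found in the original article, the arithmetic behind that doubling threshold is worth spelling out. If an exposure multiplies the baseline risk of a disease by a relative risk RR, then among exposed people who develop the disease, the fraction of cases attributable to the exposure is (RR - 1)/RR, and that fraction crosses one half, "more likely than not" in legal terms, only once RR exceeds 2. A small illustrative sketch:

```python
# Illustrative arithmetic only: why courts look for a relative risk (RR) above 2.
# If exposure multiplies baseline risk by RR, the share of cases among exposed
# people that is attributable to the exposure is (RR - 1) / RR.

def attributable_fraction(relative_risk: float) -> float:
    return (relative_risk - 1) / relative_risk

for rr in (1.5, 2.0, 3.0):
    share = attributable_fraction(rr)
    print(f"RR = {rr}: {share:.0%} of cases in exposed people attributable to exposure")

# RR = 1.5 -> 33%; RR = 2.0 -> 50%; RR = 3.0 -> 67%.
# Only above RR = 2 does attribution to the exposure become more likely than not.
```

This is, of course, a statement about groups; as Titus notes, epidemiology generally cannot prove the exact cause of a disease in any particular individual.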

DES mothers and children who wanted to sue the manufacturers for damages faced other legal hurdles. Many times, several decades had elapsed between when the mothers took DES and when they filed their claims. As well, many companies sold DES between 1941 and 1971, and women rarely knew who manufactured the specific pills they took. In the 1980 case Sindell v. Abbott Labs, the California courts settled this issue by holding DES manufacturers liable for their proportion of the market share of the drug. If Company A made 40 percent of the DES sold in a particular area, they would pay 40 percent of the judgement awarded.

The success of DES mothers and daughters in winning tens of millions of dollars in class-action lawsuits, along with an opening of investigations into the possible harm to DES grandchildren, spurred interest in lawsuits by the third generation. In 1991, lawyers for nine-year-old Karen Enright, who was born prematurely and subsequently developed cerebral palsy, brought a case against DES manufacturer Eli Lilly. The grandmother took DES in 1959 while she was pregnant with her daughter, born in 1960.

Karen herself was born in 1981, and the suit alleged that the daughter’s reproductive tract abnormalities caused the granddaughter’s condition. In a six-to-one ruling in 1991, the New York Court of Appeals rejected the suit on philosophical grounds. Writing the majority opinion, Chief Judge Sol Wachtler noted that, “For all we know, the rippling effects of DES exposure may extend for generations. It is our duty to confine liability within manageable limits. Limiting liability to those who ingested the drug or who were exposed to it in utero serves this purpose.”

The idea of proximate causation is implicit in the Enright ruling, according to Steve Gold, a professor of environmental law at Rutgers Law School. Tort law was developed to address typically immediate harms from actions, dealing well with cases such as a broken arm from playground equipment or severe burns from hot coffee. Cause occurs immediately before effect. Many environmental harms, even those that affect those directly exposed, often take decades to show up. Injury to the children and grandchildren of these individuals can be too far removed for courts to agree that the manufacturer should be liable. Multigenerational suits “offend many people’s sense of justice. It’s valuable [to the courts] to make liability finite,” Gold said.

Third-generation DES lawsuits aren’t based on epigenetics, but the courts are likely to apply similar principles to plaintiffs in any multigenerational epigenetics cases. “There’s no fundamental legal issue that would interfere with a lawsuit,” Gold said. “[But] by the time you’re three generations on, finding proof that an exposure caused harm is going to be difficult to find or show.”

Still, Texas-based attorney Andrew Lipton points out that the revolution in the scientific, popular, and legal understanding of genetics could alter a judge’s willingness to hear a case based on epigenetic evidence. A solid case depends on demonstrating evidence of exposure both via historic records and via every chemical’s unique epigenetic fingerprint. Epidemiologists also need to provide proof that this fingerprint isn’t caused by other environmental exposures and that it leads to harm.

Lipton believes that lawsuits based on multi-generational epigenetic evidence are coming down the road. “The problem that you’re going to keep running into, though, is finding statistical significance for second generation or third generation injuries, and linking it back to a particular genetic or epigenetic defect,” he said.

For the Michigan victims of PBB, it’s something of a moot point. In the 1960s, the multinational corporation Velsicol purchased a controlling interest in Michigan Chemical before dissolving the company after the PBB disaster. The plant was eventually torn down, and in a bargain struck with state officials and the Environmental Protection Agency, Velsicol was permitted to escape blame for any human damages outside the plant in exchange for paying the state of Michigan $38.5 million to clean up and construct a concrete cap over the site in St. Louis. To the best of his knowledge, Carl Cranor, a bioethicist at the University of California, Riverside, said no one has yet won a lawsuit on the grounds of multigenerational epigenetic harm, nor is the science yet ready for such a suit.

Taking regulatory action may prove no easier, despite Wachtler’s opinion in the Enright case that it was the FDA’s role to promote safe medications, not the court’s. Although contaminants like PBB fall under the purview of the EPA, the principle remains the same. The problem, according to Mustafa Ali, former senior advisor of environmental justice and community revitalization at the EPA, is that testing chemicals even for acute exposures in single generations is expensive, often costing upwards of $330,000 per compound. And the anti-science, anti-regulatory climate in Washington makes it profoundly unlikely that multigenerational toxicological testing will begin as long as the current administration is in power.

“When you’re trying to develop policy, you need to have strong science in place. Many of the chemicals haven’t been evaluated for that level yet. That’s where one of my great concerns comes in with the cutting of budgets and sort of taking a step back from this needed science,” Ali said.

Still, many experts believe it is only a matter of time before human epigenetic inheritance moves, legitimately, from the murky realm of speculative and uncertain science to something more concrete and, culturally and perhaps even legally speaking, more consequential. In that sense, should the work of Marcus and other researchers ultimately yield fruit, it is hard to overstate the potential impacts.


The early morning of December 14, 2013 revealed a snowscape in St. Louis, Michigan. Temperatures hovered right around freezing at the town’s four Superfund sites, including the site of the former Michigan Chemical Corporation plant. Murray Borrello and Ed Lorenz, both professors at nearby Alma College, were assisting Marcus with a blood draw event at the St. Louis Town Hall. They knew her team was counting on a large turnout to get the samples she needed for her coming years of research. Bad weather was the last thing they needed. By 8 a.m., two hours before the blood draws were scheduled to begin, they realized they had the opposite problem — a line of people wrapped around the building, shivering and stamping their feet to stay warm.

The team ran out of chairs for everyone and by noon, they had run out of needles and Vacutainer tubes for blood. Marcus’s assistants raided local hospitals and health departments for supplies, but even with their scavenging, the Emory team had to turn people away. Nearly 200 people showed up to provide samples and fill out questionnaires. Though he wasn’t part of the initial Michigan Department of Community Health study, Jim Hall was among them — in part because Marcus wanted to cast a wider net, a hunch that paid off. When Hall’s results arrived in the mail several months later, he learned his body still contained massive amounts of PBB — 5.5 parts per billion, 16 times more than many farm families in the 1970s — and fathers like Hall were precisely the individuals Marcus needed for her study.

Because PBB persists in the body’s fat stores potentially indefinitely, children of exposed mothers had direct exposure to PBB in the womb and during breastfeeding. In daughters, egg cells destined to become grandchildren form during fetal development, giving even maternal grandchildren a direct PBB exposure. To prove transgenerational epigenetic inheritance, Marcus would have to wait until the arrival of the great-grandchildren of exposed mothers, the equivalent of Skinner’s F2 rats.

Fathers, however, only provided their DNA. If, like Jim’s wife Ida, the mother was not exposed to PBB (Ida grew up in Middleton, Michigan, 20 miles from the PBB mix-up but has no PBB in her blood), Marcus and Smith could start seeing effects in the grandchildren of these fathers.

“We need three generations, and the first generation has to have an unexposed mom, so that we know that the second generation was not directly exposed in the womb and that they only get the exposure information that’s contained in the father’s epigenome,” Marcus said.

“Since the exposure was in the early 1970s,” she continued, “we know quite a number of families that have three generations, although, it’s going to be difficult to identify families that have that exposure pattern.”

Marcus and Smith are trying to find 20 to 25 of these families, and hope to start a small pilot study by 2019. The hardest part has been finding unexposed mothers, since nearly 90 percent of Michiganders were thought to have been exposed to PBB in the 1970s, and upwards of 80 percent of them still have elevated blood levels of PBB.

Diving more deeply into PBB’s toxic legacy, including finding evidence that could provide conclusive proof about multigenerational epigenetic inheritance, has hit a major roadblock. In the years before the Michigan Department of Community Health handed over control of the study to Marcus, they transferred the paper and electronic study files to the Michigan Public Health Institute (MPHI) for digitization and storage. When individuals signed consent forms to have their information transferred to Emory, MPHI was unable to locate many of their records due to missing data. For once, Marcus’s broad smile and easygoing manner slipped from view as she discussed the issue, though she now says that the Michigan Department of Public Health is committed to searching for the additional historic data. Without it, her studies won’t have the critical number of people needed to show the potential multigenerational effects of PBB.

Community members are even more frustrated at the delays from MPHI, which have been hindering research that could finally answer health questions that have loomed over them for nearly half a century. “I think it’s that they don’t care,” Lorenz said bitterly.

Perhaps so, but John Greally, a geneticist at the Albert Einstein College of Medicine in New York, suggests that no matter the data and no matter the cohort, proving epigenetic inheritance in humans will be a tall order. It doesn’t help, he adds, that epigenetics has become a scientific buzzword that has different meanings to different people. “We rarely clarify what we mean when we use the word,” he said in an email.

To make her case, Marcus will not only have to show changes in the regulation of specific genes caused solely by exposure to PBB; her team will ultimately have to outline a mechanism for how those changes lead to disease. The list of variables that can cause changes to DNA methylation seems endless — different cell types have different patterns of regulation, and natural variations in the genetic code can also affect DNA methylation patterns. Cross-sectional studies that use individuals already diagnosed with a specific condition can’t determine whether epigenetic changes caused the disease or are the result of the disease — although researchers are actively exploring ways to do this. Peel back one layer of complexity, Greally said, and you find another — Russian nesting dolls made of As, Ts, Cs, and Gs.

“To my knowledge, scientists have only been able to show multigenerational epigenetic inheritance in animal models, never in humans,” he said. “But that isn’t to say that it can’t or won’t be done.”

In the meantime, families in St. Louis, Michigan struggle to leave because their homes are so close to the old Michigan Chemical plant. Jim Hall can’t get ahead because his thyroid problems and his daughter’s illness drove him into severe debt. As Hall and others see it, the one thing they could give their children — a healthy start in life — was taken from them by a chemical exposure nearly half a century before. No lawsuit or regulation will return these stolen inheritances, but Hall — and Marcus — both said they hope that the research now underway will one day mean that fewer communities are poisoned, and perhaps that fewer genomes will march forward into the future scarred by the exposures of generations past.

“This is our home,” Hall said. “There’s no reason this should have happened, and I don’t know if they’ll ever be able to really clean it up.”


CORRECTION: An earlier version of this piece incorrectly identified one of the organizations that analyzed cattle feed for Frederic Halbert. It was the Wisconsin Alumni Research Foundation (WARF), not the Wisconsin Animal Research Foundation.

Carrie Arnold is a freelance science writer from Virginia. She covers all aspects of the living world and has written for a variety of publications including Mosaic, Aeon, Scientific American, Discover, National Geographic, and Women’s Health.

This article was originally published on Undark. Read the original article.

— Interview with 3-D Nebulae Artist Teun van der Zalm

We at Cosmic Roots and Eldritch Shores recently had a chance to sit down with artist Teun van der Zalm and chat with him about his work. Teun uses mathematical modeling and computer rendering to create stunning 3-D images and videos of nebulae. His work can be seen in art galleries across the globe, and the videos he creates have been used in short films and other visual media in recent years. His work is both exquisite and inspiring (check out his website for more) and we are excited to be featuring some of it over the coming weeks, starting with the first of his Nebulae Short Films.

Greetings Teun, and thank you for sitting down with us to talk a little bit about your work. First of all, can you please tell us a little bit about your journey becoming an artist?

Since my childhood I have always been fascinated with creating images. Early on, I got a video camera from my father and made short films with my friends using Legos. In 2004, I began to study animation at the Utrecht School of the Arts. While there, I worked on two short films, City of Lights and Tears/De Breuklijn. These were screened at more than fifty film festivals all over the world. After finishing my studies in 2008, I worked for four years as a freelancer on various jobs, including creating animations for documentaries and short films. Then in 2013, I began my journey into the particle realm. First, I began with the abstract. I searched for new forms using physics and other mathematical methods. Now I have developed ways to create nebulae structures in 3D using only mathematics.

What inspires you about the universe? Why did you choose to express that interest in nebulae? 

Good question. Honestly, everything inspires me about the cosmos, from really big nebula structures to the single form. It was a logical decision for me to create these stellar nurseries. I began simulating abstract forms as a sort of art project. I wanted to create more stylistic work. I was searching for a way to merge my old passion (astronomy) with the skills I had developed over the last couple of years.

What’s a fun fact that you learned about the universe while creating your work?

That is difficult to say. My work began mostly with looking at Hubble images as an artist. Always asking myself the question, how would I do this on the computer? I began my research by dissecting all the parts that make up nebulae and reading as much information as I could about the supernova process. Combining these, I began to develop the look and feel of the nebulae using the math I learned. In this process, perhaps I learned more about the complexity of our universe and how I could translate that onto a computer.

Your website mentions that you use the Perlin Noise algorithm to generate your artwork. Can you share your process with us?

I began using physics to build the basic nebulae form. This means that real world physics has been adapted to simulate the flow of the clouds. At this point, it is a volumetric cloud. After that, I transform these volumetric clouds to billions of smaller particles. Then I use different Perlin Noise variations to add all the fine detail and layers.
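To make the layering idea concrete, here is a minimal, hypothetical Python sketch of fractal Perlin noise of the sort described above. It is an illustration only, not the artist's actual pipeline, and it assumes the third-party `noise` package, whose `pnoise3` function provides 3-D Perlin noise.

```python
# A minimal sketch of layered Perlin noise for a volumetric cloud,
# loosely following the workflow described above. Illustration only,
# not the artist's pipeline. Requires: pip install numpy noise
import numpy as np
from noise import pnoise3  # classic 3-D Perlin noise


def density_field(size=32, octaves=4, scale=0.05):
    """Return a size^3 grid of densities in [0, 1] built from layered noise."""
    field = np.zeros((size, size, size))
    for x in range(size):
        for y in range(size):
            for z in range(size):
                # pnoise3 returns values in roughly [-1, 1]; octaves add finer detail.
                n = pnoise3(x * scale, y * scale, z * scale, octaves=octaves)
                field[x, y, z] = (n + 1) / 2
    return field


if __name__ == "__main__":
    cloud = density_field()
    # Keep only the densest voxels as "particles" seeding the nebula.
    particles = np.argwhere(cloud > 0.6)
    print(f"{len(particles)} particle positions out of {cloud.size} voxels")
```

In a real pipeline, those particle positions would then be rendered with volumetric lighting; the point here is simply that summed octaves of noise give both the large cloud shapes and the fine filamentary detail.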

How long does it take to generate an image? Or one of those gorgeous videos?

That depends. The further I developed my process, the more detail I could form. But that also increased the render and production time, especially as I began to use more and more particles. Taking that finer detail into moving images was a real challenge. I had no idea how to do it without rendering for months. Then, while working on a project at the beginning of 2017, I developed a way to slice-render the nebulae as images and place them in a compositing program where I can animate the camera and add the stars. This way it was faster and more flexible for me.

Visual media has inspired a generation of scientists, amateur astronomers, and even the movie industry. For example, the work on visualizing gravitational lensing in the movie Interstellar resulted in academic research papers about the underlying mathematics. Do you have any plans on expanding your artwork to other media or to academic research?

Over the last six months I have worked on several VFX projects, as well as commercials, music videos, and other productions. So, I have already been able to expand my work to several other kinds of media. I would really like to connect and develop my style with an academic background, but I’m still searching for a university or company that is interested.

Do you have any advice for aspiring artists looking to work with mathematics in their work?

Well, I am more of a system-building person. In my work the process is king. Of course, there will be a lot of math, but if I fail at some other point in the process and I can’t figure it all out, I will take a longer route than some other people to find the solution. I always dissect the problems into parts to figure them out, not all at once.

What can we expect to see from you in the coming months? Any exciting projects or releases?

Yes, I have many VFX projects, planetarium shows, and other visual presentations coming up. For example, I will team up with the biggest dark ambient music artist, Lustmord, to create a journey through our universe for his upcoming shows. We will even create a full-dome experience.

What books are on your nightstand? 

Oh, not many. I am a bit embarrassed about that. A while back I read lots of books about the art of filmmaking and science-related books. Even many Stephen King stories. But at this moment I am mostly inspired by technology, movies, television, and old 70’s progressive rock music.

Thank you so much for your time!

You’re welcome!

— A Review of Science in 2017

It’s the end of the year and we’re all reflecting on what’s been accomplished, what’s been changed, and what’s been forgotten about during the last 12 months. I’ve been thinking about all of those things in my own life, realizing now that I still need to fix the broken chest in my bedroom and clean out the basement (again). But I’ve also been looking back at how science and science research have fared this year. It’s been up and down, to say the least, and I want to highlight a few victories and failures as we move on to 2018.

The Good:

Last April, the March for Science moved science firmly into the political arena as scientists, clinicians, and supporters around the globe united to promote truth and increase awareness of science-related causes. For many science supporters, this was the first time they had taken politically motivated action to protect the integrity of the field. In 2018, I expect to see science become even more polarized. Topics like climate change and alternative fuels will undoubtedly become platform issues for politicians (in some cases, they already are), and keeping real facts above the ‘alternative facts’ movement may be one of the most important challenges scientists face away from the bench.

Organizations like 314 Action have sprung up to help elect scientists and doctors to the Hill in Washington. I’m keen to see what effect this will have on science-related policy and on U.S. policy in general. These new candidates are human, after all, and can still make mistakes. They will be prone to the same pressures and special interests that plague the Capitol today.

In spite of these misgivings, it IS time for science to be central in politics. As funding changes in response to the new tax overhauls in the coming years, it’ll be important to see how research and education are affected. There’s been some success already. The March for Science was spearheaded by a younger generation more willing to get into the weeds and call out our state representatives. The final tax bill still includes the graduate student tuition waiver, which, had it been taken out, would have made going to school for an advanced degree prohibitively expensive. If it weren’t for grass-roots mobilization by graduate students and organizations like AAAS, the attack on science education would have claimed a major victory.

The Bad:

While the March for Science deservedly belongs in the Good category, it also belongs here in the Bad. The March for Science has had many problems since its inception. Major issues of transparency and inclusivity have stymied the movement’s ability to energize the next generation and keep it involved. This is very unfortunate. The movement grew almost too fast and soon looked very different from the Facebook group it began as last February – itself a direct response to the Trump Administration’s early dismantling of science and the increased use of ‘alternative facts’.

I remember my own frustration as we waited weeks and weeks last spring for any answers on what would be happening during the actual march in D.C. on Earth Day. Understandably, many of those early organizers had never undertaken such an international task, with all its logistical and financial challenges. But most of those original members have now left the March for Science and sharply criticized the way the organization is run. If the March for Science wants to be more than a flash in the pan, the current board will need to sort out its in-house issues or risk losing all the momentum built up last winter.

The Trump Administration HAS claimed some victories. Government scientists have been barred from presenting at conferences, climate scientists have been blocked from discussing their work, and the words ‘climate change’ have been stripped from the Environmental Protection Agency’s website.

Other areas of concern include provisions in the new tax bill that open up the Arctic National Wildlife Refuge for drilling. And while only parts of the refuge will be available for drilling, ANWR represents one of the last pristine wildernesses in the United States, and these areas should be protected for as long as possible. The long-term protection of ANWR and the National Parks is essential not only for wildlife protection and management, but for sustaining a better future for our children. Hopefully this message rings strongly next year.

Trump’s 2018 budget proposal also called for drastic cuts to the NIH and NSF. While most politicians in Congress have called this a non-starter, it will be interesting to see the final funding levels for both of these essential agencies in the 2018 fiscal year budget. Trump has made it clear that basic science research, particularly with our climate and alternative fuel sources, will not be a priority for his administration.

The Ugly:

Some topics of interest in science this year are so controversial they fall into the ‘Ugly’ category. This is for a variety of reasons, which I’ll detail below.

First, the United States has formally announced its withdrawal from the Paris Climate Accord, becoming the only country on the planet to withhold support for this vital initiative. This really is the proverbial ‘burying the head in the sand’ and is not only short-sighted, but just stupid. Many U.S. cities have already declared they will follow the accord’s guidelines for carbon emissions in lieu of the federal government’s tepid response. In the long run, withdrawing from the accord will hurt U.S. competitiveness in energy jobs and infrastructure, and add to our national debt as more powerful storms continue to pummel the coasts. Am I being too hyperbolic? Perhaps not enough, really.

The next issue concerns the reproducibility crisis that is rippling throughout science. I’ve written before about the crisis, but briefly, scientists and researchers are finding that many important studies cannot be reproduced outside the laboratory where the observations were first made. This year isn’t necessarily a watershed year for addressing the issue; however, improving the philosophy and process of science is certainly a current topic of debate at many institutes.

I struggled to decide where the crisis should be mentioned in this article. Reproducible and rigorous research is an integral part of the scientific process, and checking the work of others is an essential component of it. In fact, new theories and protocols can’t be pushed forward without this systematic re-analysis. It’s a good thing.

However, the increased media coverage of the crisis plays into the hands of those who want to tarnish science and continue to chip away at the pillars of truth. The narrative needs to change to focus on how this self-checking is an important cornerstone of how science is conducted, validated, and pushed forward. As long as that narrative can be cast into doubt by proponents of ‘alternative facts’, fake news, and other agendas, the crisis stays in the Ugly category for me.

Finally, there continues to be a major patent dispute over ownership of the CRISPR gene editing technology between the Broad Institute and the University of California, Berkeley. Billions of dollars are at stake, and this year the first ruling was in favor of the Broad Institute. But Jennifer Doudna and UC Berkeley have appealed, and the next hearing will be next year (for a nice review of this issue, see this article in the Wall Street Journal).

Anytime individual researchers compete with one another it can turn into high drama. But this time the stakes are enormous, and I can’t help but feel this may be yet another instance of a woman in STEM research getting handed the short straw (Doudna was the first to develop the CRISPR system as a gene editing tool in bacteria). Plus, it’s never a win when two highly respected international research institutes go at each other so ruthlessly for control of an innovative technology. This will get uglier, and it could potentially hold sway over whom the Nobel Committee awards the Nobel Prize to for this discovery. Stay tuned.

And for now, that about wraps it up. Not to end on a downer, but I’m hoping things improve a bit next year. See you in 2018!

— The Size-Luminosity Relationship in Extra-Galactic Astronomy — a paper by Alex Drozd

Here we offer you an article on the curious case of the density range shared by all modern-day elliptical galaxies.

Some physical process we don’t understand is driving all elliptical galaxies to evolve towards a certain common density. Is this an attribute of galaxy structures themselves, or a consequence of an undiscovered physical law? The size-luminosity relationship is a mystery – evidence of a major trend that astronomers never would have expected.

Alex Drozd explores the phenomenon.

The Size-Luminosity Relationship in Extra-Galactic Astronomy

Alex Drozd

 

 

When I started undergraduate research at the University of Alabama, I met Dr. Nair, an assistant professor in the Department of Physics & Astronomy. New to the University, she was looking to build a team of undergraduates to help streamline the basic chores involved in her research. Dr. Nair is an extra-galactic astronomer; she studies galaxy evolution and the structural differences between galaxies in the local universe and galaxies at high redshifts — that is, the light of objects moving away from us is in effect stretched into longer, lower-frequency wavelengths, and the farther away the object, the greater the shift. When I approached her about joining the team, she directed me to the white board in her office, brimming with numbers, astrophysics equations, and hand-drawn graphs of galaxy brightness profiles.

“I’m looking for more undergraduates,” she said. “I have more data to analyze than I have time for. Are you interested in studying galaxies?”

“Sure,” I replied. “I live in one after all.”

The local universe extends from the Milky Way to a redshift of z<0.1 — about one billion light years away — where z is the ratio of the galaxy’s relative speed to the speed of light, from which a distance can be calculated. For Dr. Nair’s research, galaxies anywhere from z=0 up to z≈3 — zero to five billion light years away — were studied. More distant galaxies have been discovered, but too few to constitute a statistically significant sample. She focuses on elliptical galaxies within this distance range because they are populous and visible enough to be analyzed and compared with their neighbors. She directed me to download a collection of galaxy cluster images within the z=0 to z≈3 range.
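As a rough, back-of-the-envelope illustration of how a distance follows from a small redshift, here is a short Python sketch using Hubble's law (v ≈ c·z, d ≈ v/H0). The value of the Hubble constant is an assumption, and this simple approximation only holds in the local universe; out at z ≈ 3 a full cosmological model is needed.

```python
# Rough conversion from a small redshift z to a distance via Hubble's
# law (v ≈ c·z, d ≈ v / H0). Only valid for the local universe (z << 1).
C_KM_S = 299_792          # speed of light, km/s
H0 = 70                   # Hubble constant, km/s per megaparsec (assumed value)
MPC_TO_LY = 3.26e6        # one megaparsec in light years


def approx_distance_ly(z):
    velocity = C_KM_S * z              # recession velocity, km/s
    distance_mpc = velocity / H0       # distance in megaparsecs
    return distance_mpc * MPC_TO_LY    # distance in light years


print(f"z = 0.1 -> roughly {approx_distance_ly(0.1):.2e} light years")
# Prints a value on the order of 1.4e9, consistent with the roughly
# one-billion-light-year scale quoted above for the local universe.
```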

Figure 1: Middle-aged elliptical galaxies between z ≈ 0 and 3. This image includes the distant galaxy cluster Abell 370, one of the very first galaxy clusters in which astronomers observed gravitational lensing, in which gravity warps spacetime and distorts the light we receive from galaxies lying beyond the gravity lens. The arcs and streaks are the gravitationally distorted images of more distant galaxies. Source: Image by NASA, ESA, HST Frontier Fields

 

In the following months, to prepare for my image and brightness analyses of elliptical galaxies, I read academic papers on extra-galactic astronomy and filled my hard drive with high-resolution images of galaxy clusters — leaving little room for anything else. It turns out that most scientific data, at least in astronomy, is available online. If you have the drive space, you can download Hubble Space Telescope images and process them yourself with free software. There are also top-of-the-line, high-priced astronomical image processing programs, like MIRA and IRIS. However, even free programs like DS9, named after Star Trek’s Deep Space Nine, are used by both students and professional astronomers for image processing and scientific analysis. I downloaded it and loaded up all the Hubble Space Telescope images I’d been storing on my computer.
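For readers who want to try this themselves, here is a minimal sketch of opening a Hubble FITS image with the free astropy and matplotlib Python libraries instead of a GUI viewer like DS9. The filename is a placeholder for whatever image you have downloaded from an archive.

```python
# A minimal sketch of loading and displaying a FITS image with free
# Python tools (astropy + matplotlib). The filename is hypothetical.
from astropy.io import fits
import matplotlib.pyplot as plt
import numpy as np

with fits.open("abell370_cluster.fits") as hdul:   # placeholder filename
    hdul.info()                    # list the extensions in the file
    data = hdul[0].data            # image data is often in the primary HDU

# Log-stretch the pixel values so faint galaxies show up next to bright ones.
stretched = np.log10(np.clip(data, 1e-3, None))
plt.imshow(stretched, cmap="gray", origin="lower")
plt.title("Galaxy cluster image (log stretch)")
plt.colorbar(label="log10 counts")
plt.show()
```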

Elliptical galaxies look like giant glowing spheres of stars (Figures 2 and 3). Our own Milky Way is a spiral galaxy 100,000 light years in diameter, with long, star-filled arms curving out from the galactic core. The stars of a spiral galaxy orbit about the center in a flat galactic plane. The Milky Way won’t always be a spiral. In about four billion years, when we collide with our closest galactic neighbor, the Andromeda Galaxy, gravitational effects will cause the two galaxies to restructure. They might become a single elliptical galaxy, in which the stars chaotically orbit about the galactic center without a plane or pattern.

 

Figure 2: The classification of galaxy shapes, based on Edwin Hubble.  (Source: Image by NASA & ESA).
“Interactive Hubble Tuning Fork“, released 19/11/2012 10:00 am; © C. North, M. Galametz & the Kingfish Team
Access the really nice Interactive Hubble Tuning Fork version of this image at http://herschel.cf.ac.uk/kingfish

 

For those of you who might be depressed by such a future for our beloved Milky Way, fear not. When galaxies merge, their stars and planets rarely collide, because there is so much empty space between them.1 The average distance between stars is about 30 trillion miles.

So, humankind is unlikely to be extinguished by a galactic collision, or by the death of our Sun in five billion years. But in two billion years the Sun’s energy output will have increased to the point where temperatures on Earth will be too hot for liquid water.1 Unless, of course, we’ve developed technology to move our planet to a safer orbit.

Galactic merging events are exactly what extra-galactic astronomers like Dr. Nair study to understand the evolution of galaxies. Looking at elliptical galaxies is the best way to do this considering they are the products of mergers. Since the observable universe is billions of light years across — its edge always growing larger as the universe continues to expand — the light we receive from the most distant ellipticals is already billions of years old—meaning we’re seeing them as they were billions of years ago. Looking at progressively closer ellipticals, we can study the entire history of their evolution, from the time they were first formed all the way up to the present.

 

Figure 3: The image compares an average present-day spiral galaxy (left) with its counterpart in the primordial past (right), when galaxies likely had more hot, bright stars.
Image credit: NASA/JPL-Caltech/STScI

Extra-galactic astronomers studying the evolution of elliptical galaxies have found a curious anomaly, referred to as the size-luminosity relationship in early-type galaxies. ‘Early-type galaxy’ is the name originally used for elliptical galaxies under the galaxy classification scheme, created by Edwin Hubble, the early 20th  century astronomer who discovered that the universe contained galaxies besides our own. The size-luminosity relationship is one of the most fascinating topics in the field of extra-galactic astronomy, and relates to the physical concept of density.

Density here refers to the amount of mass within a given volume of space. A cup of water is denser than a cup of air, and a cup of iron is denser than a cup of water. Though mass and weight aren’t the same thing, they’re directly related, and you can see that a given volume of space is denser if it holds more mass than one holding less.

Galaxies have size and mass just like all other matter in the universe. Galaxies are collections of stars and gas clouds bound together by gravity. The ones with more stars are more massive than the ones with fewer, but a small galaxy with a hundred billion stars is denser than a larger galaxy with the same number of stars.

The most distant elliptical galaxies, and therefore the oldest, are extremely compact. In this context, luminosity directly relates to mass because more mass in a galaxy means more stars, and more stars means more brightness3.

When extra-galactic astronomers like Dr. Nair plot the size-luminosity relationship of elliptical galaxies, they observe something quite unexpected: Billions of years ago these galaxies were quite dense, but the degree of density varied greatly. Yet local, present-day ellipticals vary little in density. This means that over time, regardless of how compact an elliptical galaxy was to begin with, it expands — or ‘puffs out’ — to the density range common to all elliptical galaxies today4. Some physical process we don’t understand is driving all elliptical galaxies to evolve towards a certain common density. Is this an attribute of galaxy structures themselves, or a consequence of an undiscovered physical law? The size-luminosity relationship is a mystery – evidence of a major trend that astronomers never would have expected.
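To picture what such a plot looks like, here is a schematic Python sketch with made-up numbers standing in for real measurements. It is meant only to show the shape of the relationship and the tightening scatter, not to reproduce Dr. Nair's data; real studies fit measured effective radii against luminosity (or mass) for large catalogs of early-type galaxies.

```python
# A schematic size-luminosity plot with synthetic, illustrative data.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n = 200
log_luminosity = rng.uniform(9.5, 11.5, n)          # log10(L / L_sun), synthetic

# Distant (early) ellipticals: sizes scatter widely at a fixed luminosity.
log_radius_distant = 0.6 * (log_luminosity - 10.5) + rng.normal(0, 0.35, n)
# Local (present-day) ellipticals: a much tighter relation.
log_radius_local = 0.6 * (log_luminosity - 10.5) + 0.4 + rng.normal(0, 0.08, n)

plt.scatter(log_luminosity, log_radius_distant, s=8, label="high redshift (illustrative)")
plt.scatter(log_luminosity, log_radius_local, s=8, label="local universe (illustrative)")
plt.xlabel("log luminosity")
plt.ylabel("log effective radius")
plt.legend()
plt.show()
```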

How do elliptical galaxies “know” when to stop growing? Is there a physical process that keeps track of and adjusts their density? How do the ultra-compact ones “know” to grow by a lot, and the less compact ones by only a little? What’s causing them to puff out?

Extra-galactic astronomers have some hypotheses about what mechanisms may be contributing to this phenomenon, the most likely one being mergers. Elliptical galaxies don’t stop merging after just one collision. In fact, they’re expected to be even more likely to collide again after a merger because they now have more mass and attract other objects nearby more intensely.

You might ask: if a merger adds both mass and size to the galaxy, why doesn’t the density stay the same? The mass of a galaxy increases, but proportionally its size increases much more5. Galaxies spin, and when mass is added to them it disturbs the angular momentum of the system, causing the orbiting bodies to spread out. So most likely, small galaxies merge with these ellipticals in events we call minor mergers, and this drives the size evolution over time, the ‘puffing out.’ Such mergers add much more size than mass, meaning the density goes down overall. Minor mergers are also more frequent.
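The arithmetic behind that statement is simply that density scales as mass divided by the cube of the radius, as in this toy Python example with purely illustrative numbers.

```python
# Why density drops even though mass goes up: density ~ mass / radius^3.
# Toy numbers only, to show the scaling, not real galaxy values.
def density(mass, radius):
    return mass / radius**3


before = density(mass=1.0, radius=1.0)   # arbitrary units
after = density(mass=1.3, radius=1.5)    # minor merger: +30% mass, +50% radius
print(f"density before: {before:.2f}, after: {after:.2f}")
# after is about 0.39: the galaxy is less than half as dense despite gaining mass.
```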

There are different forms of merging events: dry mergers and wet mergers. The former refers to merging events between galaxies lacking appreciable interstellar medium — gas clouds in the space between the stars – where not much interaction happens between the different pockets of gas inside the two galaxies. Because of this, dry mergers don’t have much of an effect on the overall behavior of the resulting galaxy, whereas wet mergers — where galaxies with appreciable interstellar medium are involved — induce star formation due to gravitational instabilities in the merging gas clouds. These processes can change the distribution of the angular momentum in a galaxy as the new stars drift into their orbital positions about the galactic center. It is currently thought that the Milky Way-Andromeda Collision will be a dry merging event, as not much gas will be available during the collision with which to trigger star formation.6

Other hypotheses have been proposed for the existence of the size-luminosity relationship. They range from the spreading out and dissipation of the gas clouds in elliptical galaxies — not related to merging events — to the astronomer’s favorite go-to when something about a galaxy’s mass doesn’t add up: dark matter,3 which makes up most of a galaxy’s mass but can only be indirectly detected. It’s possible that the restructuring of a galaxy’s dark matter during merging events could be causing the size evolution — assuming it interacts with itself at all.

Another hypothesis, gas dissipation, notes that not all distant galaxies are compact; a small minority are not very dense at all. Yet even these anomalous galaxies eventually converge on the common density — otherwise, we’d still see such outliers in the local universe today. Over time they lose gas, and this affects the galactic structure, decreasing their size and therefore increasing their density until they reach the same value that most compact early galaxies evolve to. So, whether elliptical galaxies begin with high or low density, they evolve over time to reach the same density as every other elliptical.7

But the merger-driven model of early-type galaxy evolution is the hypothesis that’s gained the most traction. It best fits the computer simulation models that theoretical astrophysicists have, even though it possesses numerous inconsistencies, and researchers are continually finding problems with it. Until a better hypothesis comes along, it’s the one most extra-galactic astronomers are sticking with.

Except for Dr. Nair and her colleagues.

She wrote a paper demonstrating that mergers shouldn’t be able to account for just how small the scatter on the size-luminosity plot is.5 The merger model predicts a narrow range of densities that nearby early-type galaxies can fall into, but Dr. Nair and her colleagues showed, using more recently collected data and different methods of analyzing brightness, that the range of densities is much smaller than predicted by merger-driven models (Figure 4).

Figure 4: The top row shows Dr. Nair’s data, where the range of densities is smaller. The bottom row shows data collected and measured by an older method of measuring a galaxy’s brightness. (Source: Nair, et al. 2011)

 

And there’s much more to the picture. Before the anomalous size-luminosity relationship in early-type galaxies was discovered, astrophysical computer models predicted that environmental factors were a key influence in the evolution of ellipticals. Early-type galaxies in low density environments, locations without many nearby galaxies or bodies, also called isolated environments, were thought to grow less than those in high density environments like galaxy clusters, where many neighboring galaxies and bodies are in closer proximity. High density environments have a higher number of collisions and merging events and early hypotheses suggested that environment would play a huge role in galaxy evolution.8

Yet, contrary to what our current astrophysical models and simulations of galaxy evolution are still predicting, it’s been observed that environment plays no role in early-type evolution5. Regardless of whether an elliptical is in a galaxy cluster or an isolated environment, it undergoes an evolution with the same end-point, becoming about as dense as every other early-type galaxy in the nearby universe. If mergers are the explanation, how is this possible? How is it that galaxies in clusters evolve exactly as do isolated galaxies? The former undergo extensive collisions and merging events, while the latter might only experience a few merging events over their entire history. The environmental independence of galaxy evolution may be the most perplexing characteristic of the size-luminosity relationship. If the mechanism by which these processes occur were to be discovered, it could provide valuable insight into how galaxies evolve, how matter was distributed in the early universe, and what galaxies might look like in the distant future.

Even if minor mergers prove to be the mechanism by which ellipticals evolve, this doesn’t answer the question as to why compact galaxies in the early universe grew at unique rates to the same final density we observe today. Why, when we look around at the nearby universe, have elliptical galaxies stopped growing? We know of no law that states an elliptical galaxy must fall into this specific and narrow range of densities. But the empirical fact remains that ellipticals evolve to have about the same ratio of mass to volume, i.e., the same slope on the size-luminosity graph. Could this lead to the discovery of a new law of physics? Perhaps one that could describe the behavior of dark matter?

And remember those images Dr. Nair had me download? The ones of galaxy clusters between the local and the distant universe, between z ≈ 0 and 3? They show what you would expect: mid-distance, middle-aged elliptical galaxies aren’t as compact as the most distant ones; they’re larger in size, suggesting they’re getting closer to the density exhibited by early-types in the local universe. We can see them in mid-approach (Figure 1). It can be frustrating to look at. What’s causing it? I wished the answer could leap out of the graph at me. But the size-luminosity relationship has remained a mystery even to veteran extra-galactic astronomers who’ve been working on it for years.

The James Webb Space Telescope is set to launch in October of 2018. With its new capabilities — for example its infrared imaging camera9 (able to see through obscuring gas and dust, see Figure 5) — astronomers will be able to make extra-galactic observations at even higher redshifts. Astronomers will be able to gather data about elliptical galaxies even further away, further back in time, and perhaps get closer to solving the size-luminosity mystery, gaining insights into how the universe we live in evolves.

Figure 5: Visible light and infrared views of the Monkey Head Nebula. Credit: NASA and ESA. Acknowledgment: the Hubble Heritage Team (STScI/AURA), and J. Hester. Using infrared we can see through more dust and gas than with visible light.

 

 

Works Cited:

1 van der Marel, R; Besla, G; Cox, T.J.; Sohn, S; Anderson, J. The Astrophysical Journal. The M31 Velocity Vector. III. Future Milky Way-M31-M33 Orbital Evolution, Merging, and Fate of the Sun, 2012; Vol. 753, 1

2 Fraser, C. Universe Today. You Could Fit All the Planets Between the Earth and the Moon, 2015

3 Nipoti, C; Treu, T; Leauthaud, A; Bundy, K; Newman, B; Auger, M. Monthly Notices of the Royal Astronomical Society. Size and velocity-dispersion evolution of early-type galaxies in a Λ cold dark matter universe, 2012; 422, 2, https://doi.org/10.1111/j.1365-2966.2012.20749.x, pg. 1714-1731

4 Shankar, F; Marulli, F; Bernardi, M; Boylin-Kolchin, M; Dai, X; Khochfar, S. Monthly Notices of the Royal Astronomical Society. Further constraining galaxy evolution models through the size function of SDSS early-type galaxies, 2010; 405, 2, https://doi.org/10.1111/j.1365-2966.2010.16540.x, pg. 948-960

5 Nair, P; Bergh, S; Abraham, R. The Astrophysical Journal Letters. A Fundamental Line for Elliptical Galaxies, 2011; Vol. 734, 2, 10.1088/2041-8205/734/2/L31

6 Cox, T.J; Loeb, A. Monthly Notices of the Royal Astronomical Society. The Collision between The Milky Way and Andromeda, 2008; 386, 1, https://doi.org/10.1111/j.1365-2966.2008.13048.x, pg. 461-474

7 Mancini, C; Daddi, E; Renzini, A; Salmi, F; McCracken, H.J; Cimatti, A; Onodera, M; Salvato, M; Koekemoer, A.M.; Aussel, H; Floc’h, E. Le; Willot, C; Capak, P. Monthly Notices of the Royal Astronomical Society. High-redshift elliptical galaxies: are they (all) really compact?, 2010; 401, 10.1111/j.1365-2966.2009.15728.x, pg. 933-940

8 Shankar, F; Marulli, F; Bernardi, M; Mei, S; Meert, A; Vikram, V. Monthly Notices of the Royal Astronomical Society. Size Evolution of Spheroids in a Hierarchical Universe, 2013; 428, 1, https://doi.org/10.1093/mnras/sts001, pg. 109-128

9 Gardner, J. The Space Science Reviews. The James Webb Space Telescope, 2006; Vol. 123, 4, 10.1007/s11214-006-8315-7, pg. 485-606

 

 

 

 

“The Size-Luminosity Relationship in Extra-Galactic Astronomy”  © Alex Drozd

Alex Drozd is a graduate of the University of Alabama. He studied astrophysics and is now working as a programmer. He is also a science fiction writer, and has previously been published by Daily Science Fiction.

 

— The 2017 Nobel Prizes in Science

The 2017 Nobel Prizes in Science

10/7/2017

The winners of the 2017 Nobel Prizes were announced this week, much to the delight of scientists, readers, and enthusiasts around the world. I’ll briefly discuss the science-related awards for this year. The Nobel Prize in Economic Sciences has yet to be awarded.

The Nobel Prize in Chemistry was awarded to Joachim Frank, Richard Henderson, and Jacques Dubochet for “developing cryo-electron microscopy for the high-resolution structure determination of biomolecules in solution.” Their pioneering work has allowed researchers and clinicians to visualize the structure of drugs, compounds, and proteins at some of the highest resolutions ever seen. By understanding how these molecules look and behave in solution, better applications can be developed for their use in health and technology.

The Nobel Prize in Physiology or Medicine was awarded to Michael Rosbash, Michael Young, and Jeffrey Hall for “their discoveries of molecular mechanisms controlling the circadian rhythm.” The circadian clock is the internal timekeeping system that governs the daily rhythms of the human body and its tissues. The centers in the brain that control the circadian clock regulate human and animal sleep cycles and are themselves governed by light and hormones.

The Nobel Prize in Physics was awarded to Rainer Weiss, Kip Thorne, and Barry Barish for “decisive contributions to the LIGO detector and the observation of gravitational waves.” Gravitational waves are ripples in the curvature of spacetime produced by massive objects colliding or accelerating through space. The waves propagate out from each disturbance like the ripples in a pond after a stone has been thrown. Albert Einstein and other physicists famously predicted their existence but did not have the technology to detect them. Until now, that is!

It is always exciting to see who gets these prestigious awards. Congrats to all the winners!

— Meet the Scientist Q & A: Benjamin C. Kinney, Ph.D

Meet the Scientist Q & A: Benjamin C. Kinney, Ph.D.

7/10/17

From time to time I’ll be conducting interviews and/or Q & A’s with scientists from around the world in the blog’s new Meet the Scientist series. We’ll discuss current research, the state of science in general, and anything else of interest that might pop up. First up: Dr. Benjamin Kinney!

Dr. Benjamin C. Kinney has a Ph.D. in Neuroscience and is a neuroscientist at Washington University in St. Louis. He is also the Assistant Editor for the science fiction podcast Escape Pod. He writes science fiction and fantasy, and his short stories have been published in Strange Horizons, Flash Fiction Online, Cosmic Roots & Eldritch Shores, and more. You can find more of his writing at http://benjaminckinney.com or follow him on Twitter @BenCKinney, where he explains neuroscience concepts on his weekly #NeuroThursday feature.

Doug: Thanks for your time, Dr. Kinney! So, what got you into science?

Ben: I started like so many scientists did: with science fiction. This goes back into the mists of childhood memories for me. For as long as I can remember, I’ve always been driven by that sense of wonder and discovery. I ended up in neuroscience because it’s the biggest mystery of all: both impossibly vast, and impossibly personal.

Doug: What does your research focus on and what have you found?

Ben: I study how the brain and body change after injury to the hand and arm. In the past, I’ve worked with amputees and hand transplant patients (and cyborg monkeys), but now I work with people who’ve suffered nerve injuries. I’m particularly interested in handedness: how can it change and what can we do for patients whose dominant hands get injured. I’m just starting up a big study to compare laboratory measurements of hand function with how patients use their hands in their daily life. Hopefully we’ll be able to figure out which of those lab measurements have a real impact on patients’ quality of life. Once we do, we’ll be able to use therapy, training, and neuro-stimulation to improve the kinds of movement that matter most.

Doug: Are there any misconceptions in your field of work or in neuroscience at large?

Ben: Neuroscience is very complex, which means it can get oversimplified. I could talk all day about public misconceptions of neuroscience, but here’s one that ties directly into my own work: the idea that a person can be “left-brained” or “right-brained” is complete bunk. The two sides of the brain are specialized for different skills, but you’re not “good at left-side things” or “good at right-side things.” You’re good at some things and not others – and it makes no difference whether those things are on the same side or opposite sides.

Doug: What’s one big question in the field that you’d like to see answered in your lifetime?

Ben: Every human being’s brain is different. Different folds and valleys, different networks of cells. What I want to know is: How much does that matter? There are so many problems, both scientific and medical, that we can’t address right now because of how much the brain varies from person to person. If we could predict or interpret that variation – for example, if we knew that certain neuroanatomical patterns affected an individual’s response to a psychiatric drug – we could understand and accomplish so much more.

Doug: How might we get more of the public to engage in science discussions?

Ben: I think the trick will be to get people thinking about science and scientists as part of everyday life, not just something that strange weirdos do in a mysterious basement laboratory. When I go to parties and people say, “I’ve never met a neuroscientist!”, I say, “Why not? Neuroscientists are everywhere. I’m surrounded by them every day!”

We’re living in different worlds – an inevitable part of how we as Americans so often structure our lives around work. But I think we need to pierce some holes in that to make science feel less like a mystery cult and more like something anyone can access.

Doug: You were recently brought on board as an editor at Escape Pod, congratulations! Any advice on how to approach incorporating hard science into science fiction or genre writing?

Ben: Remember that the science is there to support and inspire the story, not the other way around. If you want to write about a new piece of science or technology, I recommend focusing less on what it does, and more on what it means to people and their lives.

Doug: Do you incorporate your research interests into your writing?

Ben: Usually indirectly. I write across a broad spectrum of science fiction and fantasy, and a fair amount of it draws indirectly from my neuroscience training. I have strong opinions about human decision-making, artificial intelligence, and alien minds – whether scientific or fantastic! But every now and then I do produce a story that draws directly from my work. The most neuroscientific thing I’ve published is a silly little story called “Cyborg Shark Battle (Season 4, O’ahu Frenzy)” in the Cat’s Breakfast Anthology from Third Flatiron Press. In graduate school I used brain-machine interfaces to study how the monkey brain controls movement and Cyborg Shark Battle applies that technology for entertainment and profit in the realm of reality TV.

Doug: Running a lab can require a lot of funding and I imagine you spend a lot of time grant writing. I know that writing scientific grants and writing fiction can be different processes. As a writer of science fiction, how do you balance the two?

Ben: Sadly, the answer is “triage.” There are only so many hours in the day, so I try to use them for productive things. Thankfully “reading” is productive to a writing career, so I have ways to relax, but I’ve probably watched only 3-4 television shows in the last five years. I also sent my wife to Mars for a year; that gave me a ton of extra writing time! I recommend it for everyone.

Doug:  Can you recommend any good books about neuroscience?

Ben: Fiction or nonfiction? I’ll go with fiction, because unless you count peer-reviewed research publications, I don’t read non-fiction in my own field – I’m the wrong audience. My favorite neuroscience-focused science fiction books are Ancillary Justice by Ann Leckie, and Blindsight by Peter Watts. Ancillary Justice’s science is subtle, but the novel is permeated by a deep understanding of neurological disorders and cognitive science. Blindsight is more explicit about its neuroscience and it wraps a fascinating argument into an excellent (and terrifying) story, so I always recommend it even though I wildly disagree with it.

Doug: Thank you so much for taking the time to answer questions about your science and writing!

— Returning to the March for Science: Where are we now?

Returning to the March for Science: Where are we now?

6/16/17

It’s been almost two months since supporters around the world marched in solidarity to increase public awareness for science and speak up for informed decision-making in the government.  This feels like a good time to step back and assess the impact of the March and discuss what’s percolating as we move forward.

Overall, the reception for the March was mixed depending on who you asked. The Pew Research Center surveyed 1,012 people about their reaction to the March for Science and a summary of their findings can be read here. Pew reported that it was primarily Democrats and younger generations that supported the March for Science and thought it would help science in the future. Although this is a small sample size, it really is unfortunate that the March appears to have only enjoyed partisan support. The whole point of the movement was to encourage advocacy from individuals of all backgrounds and create a new public discourse about informed policy. In this respect, the March for Science had questionable impact on the collective view of science in the entire community.

Perhaps telling is this graphic that summarizes the viewpoints by political leaning:

Source: Pew Research Center

Add into the mix President Trump’s announcement earlier in June that the United States would withdraw from the Paris Climate Agreement, and his continued commitment to reinvesting in coal and fossil fuel energies, and it seems the March was not successful in reaching the ears of the White House. This is very disappointing, especially considering clean energy jobs are now a larger portion of the US economy than coal, and global warming is going to have a major impact on our health and our economy in the future.

Thankfully, the message did reach ears on Capitol Hill – right where the March for Science ended in D.C. The White House’s budget proposal for fiscal year 2017 included drastic cuts to the National Institutes of Health (NIH) and many other science funding agencies. But the deal struck by Congress to fund the government through the end of September ultimately saw an increase in funding (from 2016 levels) for the NIH and other organizations, including the United States Department of Agriculture (USDA), the National Oceanic and Atmospheric Administration (NOAA), NASA, the US Geological Survey (USGS), the National Science Foundation, and parts of the Department of Energy (DOE). Unfortunately, the EPA is still in the crosshairs of the new administration and lost funding compared with last year.

Overall, this is good news, considering the people directly responsible for negotiating and enacting the federal budget appear to be supportive of a positive role for science in society and within our government. However, the battle will be taken up again later this year with the 2018 budget proposal. Science is again being threatened with devastating cuts to research: a 22% cut to the NIH, 19% to the Department of Health and Human Services (HHS), 15% at the DOE, 13% at the NSF, and horrific cuts to research at the Department of Homeland Security (DHS). NASA would see a slight budget increase, but not for climate-related research.

For me, this is where any short-term gains stemming from the March for Science will be measured. How will Congress respond to cuts in science moving forward in the Trump Administration? It is incredibly unfortunate that there is still a partisan divide when it comes to support for science. We need to work together as a nation to cross those barriers and tear them down. Science impacts us all, and a unified front for science advocacy makes it that much more powerful. Below is a picture of my friend with his sign during the march, which I think highlights this important issue:

It’s also up to the scientists, doctors, researchers, and all those in the scientific community to continue engaging and speaking up about these issues. We have to work together on this. The momentum of the March will only continue if there is a sustained level of input and idea-sharing that politicians and the community find accessible. The long-term payoff of this continued discussion will come with the next generation: the growth of our world’s future research and STEM community, investment in clean energy and education, and the development of new technologies to better our world. Hopefully, in the budget battles to come, the important gains science has made this year will be highlighted and used to protect funding and inspire others to invest wisely in our future.

— What Can Open Access Data Do for You?

What Can Open Access Data Do for You?
5/8/17

 

Data collection and usage is an essential component of scientific research. It’s arguably the most important. Without data, we can’t make observations about the world and deduce truths about how it works.

I wrote in my previous post that research articles are the primary source of data and dissemination of hypothesis-driven research. But while research articles concisely present some data, there is almost always a larger dataset that can still be of use to the authors of the article, and others.

There is a growing movement within science for open access to all data generated for a research project or collected by government agencies. Open access data means that the raw and processed data should be available, free of charge, to anyone. Access to data is a core tenet of the scientific process, and most scientific projects funded by the United States government, often through grants from the National Institutes of Health (NIH) or National Science Foundation (NSF), will ultimately be published and released in an accessible format to the public.

However, depending on where the manuscript is published, there may be a long embargo on the release of the text or data and only those with subscriptions can access it right away. Open access data means that all the data and text is released the day the manuscript is published. This also allows collaborators, and even research competitors, to further peruse this data and use it again for different questions (if applicable).

Many journals, including the umbrella Public Library of Science (PLOS) journals, have taken this endeavor as far as they can, providing access to as much of the data used in a research article as possible. PLOS also created guidelines to determine how open a journal is and, thus, the articles that are published within it. PLOS has worked hard to streamline the terminology of open access data using the ‘HowOpenIsIt?’ Open Access Spectrum and its handy evaluation tool to evaluate and rate scientific journals.

But what does open access really mean? How does one even access that data? I’ll take you through an example using the PLOS website.

I went to PLOS.org and entered in the first key term that came to mind. I started working on this post on a beautiful weekday morning in Baltimore and I heard some birds chirping at each other through my window, so I did a search using the term: bird sounds. I know very little about birds so I clicked on the first link, which directed me to a research paper entitled: Automated Sound Recognition Provides Insights into Behavioral Ecology of a Tropical Bird.

Now, there are a few things to note about this article and others like it on PLOS. First, you can download the entire article as a PDF. For many journals, even those found in Nature and Science, you may already hit your first obstacle: a paywall. This means you’ll need a subscription to the journal or publisher to gain access, and this can get very costly for institutes and individuals.

Next, this article’s supplemental data is found near the bottom of the page, which is the case for many research articles like it. Here you often find raw datasets, metadata analytics, additional graphs, and/or tables cited in the article but not necessarily featured as essential figures in the main text. You should be able to download each of these files individually. In fact, this article on tropical birds has a set of supplementary files that include the actual recordings of the bird calls used in the analysis. (This one is my favorite. It sounds like a monkey.)

You’ll also notice there is an entire section labeled ‘Data Availability’, which is located just below the article’s abstract. Here you can find all the databases that the raw and processed data was uploaded to during the publication process. These databases, like Gene Expression Omnibus, Zenodo, and Data.gov, offer datasets that are free to download and explore on your own after each manuscript is accepted and published online. Forbes created a list of 33 databases that feature open source file sharing and storage and each has its own unique sets of data that are free to explore.

So, what should we do with this data? Why is open access data important?

In theory, open data should be provided with every manuscript that used public funding to support that research. This isn’t always the case in practice, however. I mentioned the restricted access by paywalls and embargoes, where data is often hidden from public view.

Open data is a check on accountability and reproducibility, and it can counter the pseudoscience that’s often in the news. For instance, climate change deniers like to argue that the Earth isn’t really warming and that global temperatures don’t change. Their arguments lean on data provided by the Berkeley Earth project. While this specific dataset has been used to support the claim that the Earth isn’t warming, it only provides air temperature recordings taken above land masses. Considering seventy percent of Earth is covered by water, the dataset is incomplete. Additional datasets, on the same website no less, provide land and sea temperature data that more accurately depict what is occurring in our climate.
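As a sketch of that kind of sanity check, the comparison takes only a few lines of Python once both series are downloaded. The file and column names below are placeholders; the real Berkeley Earth downloads use their own layout, documented on the site.

```python
# A sketch of comparing a land-only temperature series with a combined
# land-and-ocean series. File and column names are placeholders.
import pandas as pd
import matplotlib.pyplot as plt

land = pd.read_csv("land_only_anomaly.csv")            # hypothetical file
combined = pd.read_csv("land_and_ocean_anomaly.csv")   # hypothetical file

plt.plot(land["year"], land["anomaly"], label="land only")
plt.plot(combined["year"], combined["anomaly"], label="land + ocean")
plt.xlabel("year")
plt.ylabel("temperature anomaly (°C)")
plt.legend()
plt.show()
```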

So open access data can go both ways, and the appropriate types of data need to be considered when applying these free documents to your own work or arguments.

Other sources of open data will even build analysis pipelines for you to use right away. ExAtlas was designed at the National Institute on Aging, NIH, to provide a one-stop shop for gene expression analysis. Taking data analysis to the next level, Swedish statistician Hans Rosling built GapMinder – an intuitive, interactive, web-based tool that you can use to visualize raw datasets in specific contexts of health disparities. GapMinder highlights the many disparities in our world, from age and income levels in the developed and developing world all the way to the relationship between country GDP and gender-specific health span and longevity.

GapMinder is an amazing program to become familiar with, and it uses publicly-funded datasets cataloged from around the world to generate meaningful results. It’s fairly intuitive to use and provides an additional dimension to analyze demographic outcomes by country and year across a variety of variables, including health, vaccination rates, income, GDP, education, gender, geopolitical region, and many others.

For example, below is a picture of the average life expectancy for every country on Earth as it relates to the total health spending each year (as a % of GDP). I’ve highlighted the United States as an example. In 1995, the US was spending almost 14% of its GDP on healthcare, with a return of about 76 years in life expectancy.

Source: GapMinder

Now, compare that with 2010, seen below. In 2010 the US spent about 18% of its GDP on healthcare, with only a very small uptick in life expectancy. This reflects the rising cost of healthcare in the US. Each of the other circles on the graph represents a specific country, and some of them have made great gains in healthcare for little additional expenditure over the same time frame. The size of the circle is directly proportional to the population of the country, and the track of yellow dots indicates the year-by-year changes within the U.S.

Source: GapMinder

All the raw data on GapMinder is freely available to download and you can track any country of interest over the time frame that data is available. If you want more information on the power of this program, and why it’s free for all to use, check out the two TED Talks on GapMinder: Talk 1 and Talk 2.
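If you would rather work with the numbers directly, here is a rough Python sketch of rebuilding a plot like the ones above from downloaded GapMinder spreadsheets. The file and column names are placeholders for whatever the actual downloads contain.

```python
# A sketch of re-creating a health-spending vs. life-expectancy scatter
# from downloaded CSVs. Filenames and column names are placeholders.
import pandas as pd
import matplotlib.pyplot as plt

spending = pd.read_csv("health_spending_pct_gdp.csv")    # hypothetical file
life_exp = pd.read_csv("life_expectancy_years.csv")      # hypothetical file

year = "2010"
merged = pd.merge(
    spending[["country", year]].rename(columns={year: "spending_pct_gdp"}),
    life_exp[["country", year]].rename(columns={year: "life_expectancy"}),
    on="country",
)

plt.scatter(merged["spending_pct_gdp"], merged["life_expectancy"], s=12)
us = merged[merged["country"] == "United States"]
plt.scatter(us["spending_pct_gdp"], us["life_expectancy"], color="red", label="United States")
plt.xlabel("total health spending (% of GDP)")
plt.ylabel("life expectancy (years)")
plt.legend()
plt.show()
```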

The U.S. government also hosts several websites that maintain public databases. Data.gov is a good place to start to see what is available for a topic of interest, from global warming to waterfowl lead exposure in Alaska and Russia. The Centers for Disease Control and Prevention (CDC) keeps a comprehensive database of health statistics and information, as does the World Health Organization.
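
Many of these portals offer their datasets as plain CSV downloads, which means you can pull one straight into an analysis with a couple of lines of Python. The URL below is a placeholder rather than a real dataset link; substitute the download link for whatever you find on Data.gov or the CDC’s data pages.

    # A minimal sketch of loading a public dataset directly from its CSV link.
    # The URL is a placeholder; swap in the actual download link you find.
    import pandas as pd

    url = "https://example.data.gov/some-dataset.csv"  # hypothetical link
    df = pd.read_csv(url)

    print(df.shape)      # how many records and variables came down
    print(df.columns)    # which variables the dataset contains
    print(df.head())     # a first look at the data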

The U.S. Environmental Protection Agency (EPA) also has its own open data website. This is not without controversy, as EPA employees are still grappling with how to respond to cultural and administrative changes under the new Presidential administration. For a time, the website was even shut down; it now appears to be back up, and an archive of older data from previous administrations will be provided. Fearing the loss of publicly-funded climate data, scientists around the world have banded together to download and archive climate data stored on the EPA and NOAA websites in case any of it is removed or destroyed. There is even a repository for these datasets called Data Refuge, where open data can be cataloged, deposited, and accessed. Considering many of the scientists who advise the EPA on environmental policy were just sacked this week, this is an important endeavor.

Moving forward, it’s critical that raw and processed data be curated and provided to the public. I hope you have a sense of how important this is for informed policy-making, and that this data is readily usable by anyone willing to take a few moments to explore it.

Next time, we’re going to dive into some of the recent space discoveries, including planets, space biology, and the latest NASA initiatives!



 

An Overview of Peer Review and Science Publication
4/10/17

 

Science News and Information is a new blog featured on Cosmic Roots and Eldritch Shores. Here, you’ll find highlights of some of the most recent discoveries and breakthroughs in science and research. I’ll try to connect each topic to its societal implications, and I will do my best to keep my own opinions out of it.

We want this space to be a source of fact. We want this space to be relevant, entertaining, and safe to explore interesting science and any underlying implications. That’s why I was so excited when Cosmic Roots and Eldritch Shores decided to put together a new science feature like this. I remember as a kid reading science fiction and fantasy and always wondering about the real science and reality behind the stories. I hope you enjoy what will appear here in the coming weeks and months, and hopefully, years.

Today we’ll start not with the excitement of the seven planets orbiting TRAPPIST-1 or the controversies of CRISPR technology (both topics I promise to return to), but with the subject of peer review and publication. Okay, I know! There’s not a great way to make the term ‘peer review and publication’ incredibly appealing. But it’s the foundation of the entire scientific enterprise and well worth a discussion. I thought this would be the best place to build our foundation as we venture to the outer rim of what we know and don’t know.

The important thing to keep in mind is that science is a process. Scientists can be wrong; we’re human after all. In the laboratory, we constantly get our hypotheses wrong, our experiments end in failure, and we knock our heads against the lab bench hoping for inspiration. Quite often we just don’t have the tools to solve the big questions and we must splash in the waist-deep waters for years until the right technology is developed to really dive into the deep end.

But when a discovery is made, it needs to be reported. This part of the scientific process is where I want to spend the rest of our discussion: peer review.

Peer-reviewed journals are the most important source of scientific information, and all scientific researchers work towards publishing their findings in peer-reviewed research journals. Journals like Nature, Science, The Lancet, the New England Journal of Medicine, Physical Review Letters, and the Journal of the American Chemical Society all review and publish new science and often compete with one another for the most impactful work.

The process begins when a researcher believes they have enough data to convince other scientists their findings are valid and true. Depending on the subject matter, this data collection could be a small pilot project or a major research endeavor that encompasses thousands of hours of work and dozens of experiments. Typically, once the arrangement of the data is outlined, the researcher puts on their author’s hat and begins crafting a manuscript to present the new findings.

In a way, scientists are storytellers and their research manuscripts present a data story. But a defense of the hypothesis is essential. Questions should be addressed, such as: Why was this experiment performed? What was observed? Why should the public care about these results? What do they mean in the context of what is already known? Do the findings challenge previous work or build upon it? Typically, the methods must be specific enough that someone else picking up the paper could repeat the experiments in their own laboratory.

Once the manuscript is completed and all the authors have signed off, it’s sent to a journal. There, it meets the first person who will review the article: the editor. Journal editors are usually experts in the field and will read the cover letter, the abstract…ideally the entire paper…and decide right then and there whether the topic of the manuscript is relevant to their journal. Just like publishing in science fiction and fantasy, certain works and topics are better fits for certain journals.

Journals like Science and Nature only publish ground-breaking work that advances a specific field or features novel approaches, methods, or technologies. Some journals are more specific: Cancer Research isn’t going to feature a paper about behavioral cognition, just like Analog probably won’t feature a classic fantasy tale about dwarves attacking a dragon’s horde…probably, unless there’s time travel!

If the editor decides to pass, the authors must decide where to submit next. However, if the editors feel the manuscript may be a good fit, the next step in the process begins. They will contact anywhere from one to three external reviewers to do a critical and thorough read of the entire manuscript. These reviewers are also experts in the field and are contacted by the journal to provide their opinion about the quality of the science, the findings, and their interpretations. It’s the reviewers’ task to judge the entire work on its own merit.

The reviewers usually provide a written reply that is a point-by-point consideration of the work, with specific comments, questions, or suggested improvements to the manuscript. Comments can range from simple typographical mistakes to the proposal of several additional control experiments that must be included. Typically, the reviewers recommend whether the manuscript should be accepted as is, provisionally accepted with minor corrections, provisionally accepted with major corrections, or rejected.

The editors collect all the reviewers’ comments for the authors and then make a final decision. It’s not unheard of for an editor to go against a reviewer, or to offer their own interpretation of the manuscript to the authors.

If a manuscript is invited for resubmission, the researchers will get a chance to look over the comments, address the concerns, and resubmit (with no guarantee of acceptance). The manuscript’s authors will write a rebuttal to the reviewers if needed, and occasionally a manuscript will bounce back between editor and author a few times. Depending on the journal, the specific guidelines, and the corrections needed, it can take anywhere from a few months to years for a research article to be published.

This dialog is arguably the most integral part of science communication. It’s essential, really, but the important point to keep in mind is that it’s not infallible. Mistakes are made, and it’s not until results are reproduced in other independent labs that important findings are taken as truth in a field. Often, disagreements between laboratories and individuals can arise.

This social discord is a healthy part of the process. For example, in 2011 Science published a report that DNA, the blueprint of life, could incorporate the element arsenic into the ‘backbone’ of its structure. DNA’s backbone normally contains phosphorus; by showing that arsenic could be used in its stead, the authors of this paper argued this was proof that life could have evolved elsewhere in the universe using different starting elements and molecules.

The news sent ripples throughout the scientific community. There were many skeptics, and ultimately the data could not be reproduced outside of the publishing laboratory. After debate in the field, and multiple attempts by various labs to replicate the findings, the results were shown to be nothing more than anomalies. Skeptics of science will argue that because of these discrepancies, science can’t be trusted. But independent validation of research results is an integral part of peer review, and in cases like this it is called post-publication peer review. The editors and peer reviewers at each journal get the work into as presentable a shape as possible; the rest is up to the community at large.

However, breakdowns in this process do occur and can lead to the publication of erroneous data (unfortunately, at times, due to fraud or bias by the publishing authors), which means it is very important that the scientific community critically evaluate published work. A landmark paper in 2005, written by Professor John Ioannidis, now at Stanford University, presented the argument that a large portion of published research is irreproducible. This set off a flurry of introspection within the scientific community to address the growing problem that some published results can’t be replicated in independent laboratories, just like the arsenic paper. There are even watchdog groups that publicly catalog retractions of journal articles that can’t be reproduced or that contain errors.

That’s not to say that scientists are willfully publishing bad data. Far from it. Papers are retracted for a variety of reasons, including for innocuous errors in experimental design or data collection. So, it’s a very good thing the scientific community polices itself and makes this known to the rest of the world.

But there is no doubt science is facing a reproducibility crisis. Major contributors to this problem are the inappropriate use of statistics, a lack of specific detail in methods sections, and lax peer review at some journals. Journals like Nature have used the crisis as an opportunity to shore up their publication and review procedures, including asking for independent validation of key experiments before publication, verification of reagents and chemicals, and open access to all of the data.

Open access is a vital component of peer review and validation. For data, open access means that all the raw data discussed in a manuscript is provided online for anyone to access, including you! Anyone in the world can download the data, use it for validation, and repeat important experiments.

This type of validation is another aspect of post-publication peer review, and it helps identify those papers that need more scrutiny. Journals such as Scientific Reports and PLOS One are entirely open access, and their publications are free for anyone, anytime, to download and view.

Some researchers even use a newer innovation called pre-publication peer review. Typically, most scientists get input on their work before it is published by presenting their data at conferences. Now, however, draft manuscripts can be submitted and posted online at places like bioRxiv, with the hope that people provide critical commentary as the manuscript is prepared for publication elsewhere.

Taking this a step further, Nature Human Behaviour published a manifesto on how to reduce publication bias and improve reproducibility and transparency. Nature Human Behaviour has also recently established a new type of peer review process for its registered reports. Researchers can begin the publication process with this journal before any experiments are performed. The research design, methods, and introduction of a manuscript are written before any experiments and are peer reviewed by the journal’s editors and external reviewers. Then, and only then, is the experiment performed, the results analyzed and examined, and the rest of the paper written. The entire manuscript is put under peer review again to check for adherence to the registered plan, and it is then published regardless of whether the findings are positive or negative.

In this way, Nature Human Behaviour hopes to reduce experimental bias and increase reproducibility, all within the standard peer review framework. I expect to see more innovations like this adopted by other journals in the coming years.

To wrap up, I hope this has helped clarify a little about what peer review really is and why it’s so important. In the coming weeks and months, we’ll explore a variety of topics and findings in science, all of which will have been examined by peer review.

 

 
