We asked over 1,200 American adults to read the biography and a single Facebook post of a (fictional) climate scientist named Dr. Dave Wilson. In this post, Dr. Wilson promotes his recent interview regarding his work on climate change. We varied the message of this post to include a range of advocacy positions – from no advocacy (discussing recent evidence about climate change) to clear advocacy for specific policies to address climate change.
When Dr. Wilson championed taking action on climate change, without specifying what action, he was considered just as credible as when he described new evidence on climate change or discussed the risks and benefits of a range of policies. In fact, perceptions of Dr. Wilson’s credibility were maintained even when he argued in favor of reducing carbon emissions at coal-fired power plants.
Only when Dr. Wilson advocated for building more nuclear power plants did his credibility suffer.
Advocacy received differently than partisanship
Our study suggests that the American public may not see scientists who advocate for general action on scientific issues as lacking in credibility, nor will they punish the scientific community for one scientist’s advocacy. Yet this study represented only one case of scientific advocacy; other forms of advocacy may not be as accepted by the public. For example, more caution is required when scientists promote specific (unpopular) policies.
Most notably, our study did not test overtly partisan statements from Dr. Wilson; none of his messages were framed in political terms. Our research participants saw them that way too: they rated all of Dr. Wilson’s statements as more scientific than political.
The March for Science, however, risks being seen as motivated by partisan beliefs. In that case, scientists may not escape being criticized for their actions. This is especially true if the march is seen as a protest against President Trump or Republicans. In our study, conservatives saw Dr. Wilson as less credible whether he engaged in advocacy or not. If conservatives see the march as a protest against their values, they may dismiss the message of the march – and the messengers – without considering its merits.
This risk is exacerbated when media coverage of the March for Science is considered. In our study, people saw Dr. Wilson promoting his interview in his Facebook post, but were not exposed to the actual interview in which Dr. Wilson made his case for a given policy. Nor were his actions disruptive; a single post on social media is relatively easy to skip or ignore, and Dr. Wilson could frame his interview in the way he liked.
The March for Science will be the opposite. If successful, the march will garner attention from news outlets, which may reframe its purpose.
Balancing the advocacy message
So what can be done to limit accusations of partisan bias surrounding the march?
One way marchers can minimize this possibility is by crafting an inclusive message that resonates with many people, stressing the ways science improves our society and protects future generations. However, the march’s similarity to other explicitly anti-Trump marches may make it hard to avoid a partisan connotation.
Moreover, in our research Dr. Wilson was portrayed as an older white male, matching cultural stereotypes about scientists; he may have had more freedom to engage in advocacy than would female or nonwhite scientists. An inclusive and diverse March for Science may challenge these traditional portrayals of scientists. While many (the authors included) would see that as a desirable objective in itself, it may complicate successful advocacy.
A goal of the March for Science is to demonstrate that science is a nonpartisan issue. It represents a unique opportunity for scientists to highlight the ways in which science improves our society. Scientists participating in the march should emphasize shared values with those who might otherwise disagree – such as the desire to create a better world for our children and grandchildren.
If the event remains a March for Science, rather than a march against a party or group, the chances increase that it will effectively focus attention on the importance of scientific research.
It’s NCAA basketball tournament season, known for its magical moments and the “March Madness” it can produce. Many fans remember Stephen Curry’s superhuman 2008 performance, when he led underdog Davidson College to victory while nearly outscoring the entire Gonzaga team by himself in the second half. Was Curry’s magic merely a product of his skill, the match-ups and random luck, or was there something special within him that day?
Nearly every basketball player, coach or fan believes that some shooters have an uncanny tendency to experience the hot hand – also referred to as being “on fire,” “in the zone,” “in rhythm” or “unconscious.” The idea is that on occasion these players enter into a special state in which their ability to make shots is noticeably better than usual. When people see a streak, like Craig Hodges hitting 19 3-pointers in a row, or other exceptional performances, they typically attribute it to the hot hand.
The hot hand makes intuitive sense. For instance, you can probably recall a situation, in sports or otherwise, in which you felt like you had momentum on your side – your body was in sync, your mind was focused and you were in a confident mood. In these moments of flow success feels inevitable, and effortless.
However, if you go to the NCAA’s website, you’ll read that this intuition is incorrect – the hot hand does not exist. Belief in the hot hand is just a delusion that occurs because we as humans have a predisposition to see patterns in randomness; we see streakiness even though shooting data are essentially random. Indeed, this view has been held for the past 30 years among scientists who study judgment and decision-making. Even Nobel Prize winner Daniel Kahneman affirmed this consensus: “The hot hand is a massive and widespread cognitive illusion.”
Nevertheless, recent work has uncovered critical flaws in the research which underlies this consensus. In fact, these flaws are sufficient to not only invalidate the most compelling evidence against the hot hand, but even to vindicate the belief in streakiness.
Research made it the ‘hot hand fallacy’
In the landmark 1985 paper “The hot hand in basketball: On the misperception of random sequences,” psychologists Thomas Gilovich, Robert Vallone and Amos Tversky (GVT, for short) found that when studying basketball shooting data, the sequences of makes and misses are indistinguishable from the sequences of heads and tails one would expect to see from flipping a coin repeatedly.
Just as a gambler will get an occasional streak when flipping a coin, a basketball player will produce an occasional streak when shooting the ball. GVT concluded that the hot hand is a “cognitive illusion”; people’s tendency to detect patterns in randomness, to see perfectly typical streaks as atypical, led them to believe in an illusory hot hand.
Importantly, GVT found that professional practitioners (players and coaches) not only were victims of the fallacy, but that their belief in the hot hand was stubbornly fixed. The power of GVT’s result had a profound influence on how psychologists and economists think about decision-making in domains where information arrives over time. As GVT’s result was extrapolated into areas outside of basketball, the hot hand fallacy became a cultural meme. From financial investing to video gaming, the notion that momentum could exist in human performance came to be viewed as incorrect by default.
The pedantic “No, actually” commentators were given a license to throw cold water on the hot hand believers.
Taking another look at the probabilities
In what turns out to be an ironic twist, we’ve recently found this consensus view rests on a subtle – but crucial – misconception regarding the behavior of random sequences. In GVT’s critical test of hot hand shooting conducted on the Cornell University basketball team, they examined whether players shot better when on a streak of hits than when on a streak of misses. In this intuitive test, players’ field goal percentages were not markedly greater after streaks of makes than after streaks of misses.
GVT made the implicit assumption that the pattern they observed from the Cornell shooters is what you would expect to see if each player’s sequence of 100 shot outcomes were determined by coin flips. That is, the percentage of heads should be similar for the flips that follow streaks of heads and for the flips that follow streaks of tails.
Our surprising finding is that this appealing intuition is incorrect. For example, imagine flipping a coin 100 times and then collecting all the flips in which the preceding three flips are heads. One would intuitively expect the percentage of heads on these flips to be 50 percent – but in fact it’s less.
Suppose a researcher looks at the data from a sequence of 100 coin flips, collects all the flips for which the previous three flips are heads and inspects one of these flips. To visualize this, imagine the researcher taking these collected flips, putting them in a bucket and choosing one at random. The chance the chosen flip is heads – equal to the percentage of heads in the bucket – we claim is less than 50 percent.
The percentage of heads on the flips that follow a streak of three heads can be viewed as the chance of choosing heads from a bucket consisting of all the flips that follow a streak of three heads. Miller and Sanjurjo, CC BY-ND
To see this, let’s say the researcher happens to choose flip 42 from the bucket. Now it’s true that if the researcher were to inspect flip 42 before examining the sequence, then the chance of it being heads would be exactly 50/50, as we intuitively expect. But the researcher looked at the sequence first, and collected flip 42 because it was one of the flips for which the previous three flips were heads. Why does this make it more likely that flip 42 would be tails rather than heads?
Why tails is more likely when choosing a flip from the bucket. Miller and Sanjurjo, CC BY-ND
If flip 42 were heads, then flips 39, 40, 41 and 42 would be HHHH. This would mean that flip 43 would also follow three heads, and the researcher could have chosen flip 43 rather than flip 42 (but didn’t). If flip 42 were tails, then flips 39 through 42 would be HHHT, and the researcher would be restricted from choosing flip 43 (or 44, or 45). This implies that in the world in which flip 42 is tails (HHHT) flip 42 is more likely to be chosen as there are (on average) fewer eligible flips in the sequence from which to choose than in the world in which flip 42 is heads (HHHH).
This reasoning holds for any flip the researcher might choose from the bucket (unless it happens to be the final flip of the sequence). The world HHHT, in which the researcher has fewer eligible flips besides the chosen flip, restricts his choice more than world HHHH, and makes him more likely to choose the flip that he chose. This makes world HHHT more likely, and consequently makes tails more likely than heads on the chosen flip.
In other words, selecting which part of the data to analyze based on where the streaks are located restricts your choice and changes the odds.
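This selection bias can be checked directly with a short simulation – our own illustrative sketch, not code from the original studies, and the function name is hypothetical:

```python
import random

def avg_heads_after_streak(n_flips=100, streak=3, n_trials=20_000, seed=1):
    """Estimate the expected proportion of heads among flips that
    immediately follow a run of `streak` heads, averaged over many
    random sequences of fair-coin flips."""
    rng = random.Random(seed)
    proportions = []
    for _ in range(n_trials):
        flips = [rng.random() < 0.5 for _ in range(n_flips)]
        # collect every flip whose preceding `streak` flips are all heads
        selected = [flips[i] for i in range(streak, n_flips)
                    if all(flips[i - streak:i])]
        if selected:  # skip sequences with no qualifying flips
            proportions.append(sum(selected) / len(selected))
    return sum(proportions) / len(proportions)

print(avg_heads_after_streak())  # roughly 0.46 – noticeably below 0.5
```

For 100 flips and streaks of three, the average proportion of heads after a streak comes out well under 50 percent, even though each individual flip is a fair coin – exactly the bias described above.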
With this counterintuitive new finding in mind, let’s now go back to the GVT data. GVT divided shots into those that followed streaks of three (or more) makes and those that followed streaks of three (or more) misses, then compared field goal percentages across these categories. Because of the surprising bias we discovered, their finding that field goal percentages were only negligibly higher (by three percentage points) after a streak of makes was, once you do the calculation, actually 11 percentage points higher than what a coin flip would produce!
An 11 percentage point relative boost in shooting when on a hit-streak is not negligible. In fact, it is roughly equal to the difference in field goal percentage between the average and the very best 3-point shooter in the NBA. Thus, in contrast with what was originally found, GVT’s data reveal a substantial, and statistically significant, hot hand effect.
Nanomachines are tiny molecules – more than 10,000 lined up side by side would be narrower than the diameter of a human hair – that can move when they receive an external stimulus. They can already deliver medication within a body and serve as computer memories at the microscopic level. But as machines go, they haven’t been able to do much physical work – until now.
My lab has used nano-sized building blocks to design a smart material that can perform work at a macroscopic scale, visible to the eye. A 3-D-printed lattice cube made out of polymer can lift 15 times its own weight – the equivalent of a human being lifting a car.
Rotaxanes are one of the most widely investigated of these molecules. These dumbbell-shaped molecules are capable of converting input energy – in the forms of light, heat or altered pH – into molecular movements. That’s how these kinds of molecular structures got the nickname “nanomachines.”
For example, in a simple rotaxane composed of one ring on an axle, the ring can slide along the axle, performing shuttling motions.
Left, a rotaxane. The ring can shuttle along the axle. Right, representation of billions of rotaxanes in solution. The motions of the nano-rings cancel each other out at the macroscale. Chenfeng Ke, CC BY-ND
So far, harnessing the mechanical work of rotaxanes has been very challenging. When billions of these tiny machines are randomly oriented, the ring motions will cancel each other out, producing no useful work at a macroscale. In order to harness these molecular motions, scientists have to think about controlling their three-dimensional arrangement as well as synchronizing their motions.
Molecular beads on a string
Our design is based on a well-investigated family of molecules called polyrotaxanes. These have multiple rings on a molecular axle. In our new material, the ring is a cyclic sugar and the axle is a polymer.
If we provide an external stimulus – like adding water – these rings randomly shuttling back and forth can instead stick to each other and form a tubular array. When that happens, it changes the stiffness of the molecule. It’s like when beads are threaded onto a string; many beads slid together make the string much stronger, like a rod.
Cartoon presentation of a polyrotaxane. The rings are changed from the shuttling state, left, to the stationary state, right. Chenfeng Ke, CC BY-ND
Our approach is to build a polymer system where billions of these molecules become stronger with added water. The strength of the whole architecture is increased and the structure can perform useful work.
In this way, we were able to get around the original problem of the random orientation of many nanomachines together. The addition of water locks them into a stationary state, therefore strengthening the whole 3-D architecture and allowing the united molecules to perform work together.
3-D printing the material
Our research is the first to add 3-D printability to mechanically interlocked molecules. It was integrating the 3-D printing technique that allowed us to transform the random shuttling motions of nano-sized rings into smart materials that perform work at macroscopic scale.
Getting the molecules all lined up in the right orientation is a way to amplify their motions. When we add water, the rings of the polyrotaxanes stick together via hydrogen bonds. The tubular arrays then stack together in a more ordered manner.
It’s much easier to get the molecules coordinated while they’re in this configuration as opposed to when the rings are all freely moving along the axle. We were able to successfully print lattice-like 3-D structures with the rings locked into position in this way. Now the molecules aren’t just randomly positioned within the material.
After 3-D-printing out the polymer, we used a photo-curing process – similar to the UV lamp that hardens nail polish at a salon – to cure it. We were left with a material that had good 3-D structural integrity and mechanical stability. Now it was ready to do some work.
Shape changing back and forth
The three-dimensional geometry of the polymer is crucial for its shape changing. A hollow structure is easier to deform than a solid one. So we designed a lattice cube structure to maximize its shape-deformation ability and, in turn, its ability to do work as it switched back and forth from one state to the other.
The next important step was being able to control the work our polymer could do.
It turns out the complex 3-D architecture of these structures can be reversibly deformed and reformed. We were able to use a solvent to switch the threaded ring structure between random shuttling and stationary states at the molecular level. Exchanging the solvent let us easily repeat this shape-changing and recovery behavior many times.
Squirting in solvent adds chemical energy to our polymer. As the solvent evaporated over time, the polyrotaxane returned to its original form.
This is how we converted chemical energy into mechanical work.
Just like moving beads to strengthen or weaken a string, this shape-changing is critical because it allows the amplification of molecular motion into macroscopic motion.
A 3-D printed lattice cube made of this smart material lifted a small coin 1.6 millimeters. The numbers may sound small for our day-to-day world, but this is a big step forward in the effort to get nanomachines doing macro work.
We hope this advance will enable scientists to further develop smart materials and devices. For example, by adding contraction and twisting to the rising motion, molecular machines could be used as soft robots performing complicated tasks similar to what a human hand can do.
Here’s a math problem even the brightest school districts struggle to solve: getting hordes of elementary, middle and high school students onto buses and to school on time every day.
Transporting all of these pupils presents a large and complex problem. Some school districts use existing software systems to develop their bus routes. Others still develop these routes manually.
In such problems, even a small improvement in operational efficiency can yield substantial benefits. Each school bus costs a school district somewhere between US$60,000 and $100,000, so scheduling buses more efficiently results in significant monetary savings.
Over the past year, we have been working with the Howard County Public School System (HCPSS) in Maryland to analyze its transportation system and recommend ways to improve it. We have developed a way to optimize school bus routes, thanks to new mathematical models.
Finding the optimal solution to this problem is very valuable, even if that optimal solution is only slightly better than the current plan. A solution that is only one percent worse would require a considerable number of additional buses due to the size of the operation.
By optimizing bus routes, schools can cut down on costs, while still serving all of the children in their district. Our analysis shows that HCPSS can save between five and seven percent on the number of buses needed.
A bus trip in the afternoon starts from a given school and visits a sequence of stops, dropping off students until the bus is empty. A route is a sequence of trips from different schools that are linked together to be served by one bus.
Our goal was to reduce both the total time buses run without students on board – also known as aggregate deadhead time – as well as the number of routes. Fewer routes require fewer buses since each route is assigned to a single bus. Our approach uses data analysis and mathematical modeling to find the optimal solution in a relatively short time.
To solve this problem, a computer algorithm considers all of the bus trips in the district. Without modifying the trips, the algorithm assigns them to routes such that the aggregate deadhead time and the number of routes are minimized. Individual routes become longer, allowing the bus to serve more trips in a single route.
Since the trips themselves are fixed, this decreases the total time the buses are on the road. Minimizing deadhead travel results in cost savings and reductions in air pollution.
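The trip-linking idea can be sketched with a toy greedy heuristic. This is purely illustrative – the actual study used optimization models to find provably good solutions, and the function name, trip data and constant deadhead time below are all hypothetical:

```python
def build_routes(trips, deadhead):
    """Greedily chain trips into routes. Each trip is (id, start, end) in
    minutes; deadhead(a, b) gives the empty-bus travel time from the end
    of trip a to the start of trip b. A trip joins the route whose bus
    becomes ready as late as possible (least idle time) while still
    arriving on time; otherwise a new route (a new bus) is opened."""
    routes = []
    for trip in sorted(trips, key=lambda t: t[1]):  # by start time
        start = trip[1]
        best, best_ready = None, -1.0
        for route in routes:
            last = route[-1]
            ready = last[2] + deadhead(last, trip)  # trip end + deadhead
            if ready <= start and ready > best_ready:
                best, best_ready = route, ready
        if best is not None:
            best.append(trip)
        else:
            routes.append([trip])
    return [[t[0] for t in r] for r in routes]

# Three afternoon trips, constant 5-minute deadhead between any pair:
trips = [("A", 0, 40), ("B", 50, 90), ("C", 55, 100)]
print(build_routes(trips, lambda a, b: 5))  # [['A', 'B'], ['C']]
```

Here trip B can follow trip A on one bus (the bus finishes A at minute 40 and deadheads 5 minutes, arriving before B starts at 50), but trip C overlaps both and needs a second bus. An optimization model does the same linking globally, minimizing buses and deadhead across thousands of trips.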
The routes that we generated can be viewed as a lower bound on the number of buses needed by school districts. We can find the optimal solution for HCPSS in less than a minute.
Serving all students
While we were working on routes, we decided to also tackle the problem of the bus trips themselves. To do this, we needed to determine what trips are required to serve the students for each school in the system, given bus capacities, stop locations and the number of students at each stop. This has a direct impact on how routes are chosen.
Most existing models aim to minimize either the total travel time or the total number of trips. The belief in such cases is that, by minimizing the number of trips, you can minimize the number of buses needed overall.
However, our work shows that this is not always the case. We found a way to cut down on the number of buses needed to satisfy transportation demands, without trying to minimize either of the above two objectives. Our approach considers not only minimizing the number of trips but also how these trips can be linked together.
New start times
Last October, we presented our work at the Maryland Association of Pupil Transportation conference. An audience member at that conference suggested that we analyze school start and dismissal times. By changing the high school, middle school and elementary school start times, bus operations could potentially be even more efficient. Slight changes in school start times can make it possible to link more trips together in a single bus route, hence decreasing the number of buses needed overall.
We developed a model that optimizes the school bell times, given that each of the elementary, middle and high school start times fall within a prespecified time window. For example, the time window for elementary school start times would be from 8:15 to 9:25 a.m.; for middle schools, from 7:40 to 8:30 a.m.; and all high schools would start at 7:25 a.m.
Our model looks at all of the bus trips and searches for the optimal combination of school dismissal times such that the number of school buses – the major contributing factor to costs – is minimized. We found that, in most cases, optimizing the bell times yields significant savings in the number of buses.
Using our model, we ran many different “what if?” scenarios using different school start and dismissal times for the HCPSS. Four of these are currently under consideration by the Howard County School Board for possible implementation.
We are also continuing to enhance our current school bus transportation models, as well as developing new ways to further improve efficiency and reduce costs.
For example, we are building models that can help schools select the right vendors for their transportation needs, as well as minimize the number of hours that buses run per day.
In the future, the type of models we are working on could be bundled into a software system that schools can use by themselves. There is really no impediment to using these types of systems, as long as the school systems have an electronic database of their stops, trips and routes.
Such software could potentially be implemented in all school districts in the nation. Many of these districts would benefit from using such models to evaluate their current operations and determine if any savings can be realized. With many municipalities struggling with budgets, this sort of innovation could save money without degrading service.
Want to fly to the moon? Well, now you won’t have to bother with all those years of rigorous astronaut training – all you need is a huge wad of cash. Elon Musk, technopreneur, has built a small spaceship called Dragon and if you slap down enough money – maybe a hundred million dollars or so – he’ll fly you to the Moon.
The first flight is set for 2018, a target so ambitious it verges on the incredible.
Such scepticism isn’t surprising, really, since history shows that soon after the Apollo 11 Moon landing in 1969, people switched their televisions to more down-to-earth events while wondering why NASA kept going back to the Moon again and again with Apollo 12, then Apollo 13, then Apollo 14 – all the way up to Apollo 17.
And even before SpaceX had delivered anything, NASA made a massive investment in the firm to get it up and running. Any claim that SpaceX is purely a commercial business, then, is also incredible.
Like many space fans, Musk will tell you that this moonshot is the first step in the “natural process” of human space expansion. The next steps involve the colonization of the Moon and then Mars.
But space travel is not a natural process; it’s a social process involving domestic politics, international competition, the marketing of patriotic heroism, and the divvying up of state funds.
Harkening back to the dark past
The “colonization” theme of space expansion is also problematic since it signifies a potential re-emergence of the social injustices and environmental disasters wrought by past colonial ventures. Being a fan of “space colonization”, then, can be likened to rejoicing in the displacement of native peoples and celebrating the destruction of wilderness.
Space fans might argue that there are no people in space to be colonized, that the Moon and Mars are uninhabited lands. But the plan to settle Mars, for example, and then to set about extracting valuable resources without working out if some alien species is living there – even if those life forms are microbial – seems reckless.
It also smacks of anthropocentrism since humans will doubtless carry to Mars the attitude that microbes are lower lifeforms and that it’s OK to stomp all over their planet spreading pollution and mucking up their environment.
Even if they are lifeless, we should consider that the Moon and Mars belong to all of us; they are the common heritage of humankind. And those who get to the Moon or to Mars first shouldn’t be permitted to plunder these worlds just for the sake of their own adventure or profit.
Trump met Elon Musk within days of assuming the presidency and, with their shared love of capitalism and penchant for self-promotion, they seem to be entering a working relationship, described by some as cronyism.
But perhaps it’s too soon to worry about Moon grabs or Martian colonialism.
First, both Trump and Musk are notorious “big talkers” and they may be playing with the macho spectacle of space travel. If their space plans gurgle into an economic sinkhole, they’ll probably quietly abandon them.
And the 2018 moonshot is not going to actually land on the Moon; it’s merely going to shoot around it and then head back to Earth. Nobody will get the chance to plant a flag.
Space tourism, moon bases and Martian colonies have all been predicted for decades and nothing has ever come of them. Wernher von Braun, the Apollo rocket hero (and ex-Nazi), showcased such prospective space endeavors on a television show with Walt Disney in the 1950s (using whizzing Disney graphics). But more than 60 years later, a space colony is nowhere to be found.
If Musk does get his rich clients to circle the Moon next year, and then manages to set up bases and colonies on the lunar surface and then Mars, it won’t be because he’s made a business success out of space expansion. And it won’t be due to the scientific merit of moon bases.
It’s possible the cosmos will be diminished and despoiled too with mining firms digging up the moonscape, rocket fuel spilled all over the Martian surface, and neon lights flashing in shiny space casinos.
Of course, some space fans believe the only way they’ll realize their space fantasies is to ride behind the glory of “visionaries” such as Musk – and the unknown mega-rich space passengers set to shoot off around the Moon next year.
The April 22 March for Science, like the Women’s March before it, will confront United States President Donald Trump on his home turf – this time to challenge his stance on climate change and vaccinations, among other controversial scientific issues.
But not everyone who supports scientific research and evidence-based policymaking is on board. Some fear that a scientists’ march will reinforce the sceptical conservative narrative that scientists have become an interest group whose findings are politicised. Others are concerned that the march is more about identity politics than science.
From my perspective, the march – which is being planned by the Earth Day Network, League of Extraordinary Scientists and Engineers and the Natural History Museum, among other partner organisations – is a distraction from the existential problems facing the field.
Other questions are far more urgent to restoring society’s faith and hope in science. What is scientists’ responsibility for current anti-elite resentments? Does science contribute to inequality by providing evidence only to those who can pay for it? How do we fix the present crisis in research reproducibility?
So is the march a good idea? To answer this question, we must turn to the scientist and philosopher Michael Polanyi, whose concept of science as a body politic underpins the logic of the protest.
Both the appeal and the danger of the March for Science lie in its demand that scientists present themselves as a single collective just as Polanyi did in his Cold War classic, The Republic of Science: Its Political and Economic Theory. In it, Polanyi defended the importance of scientific contributions to improving Western society in contrast to the Soviet Union’s model of government-controlled research.
Polanyi was a polymath, that rare combination of natural and social scientist. He passionately defended science from central planning and political interests, including by insisting that science depends on personal, tacit, elusive and unpredictable judgements – that is, on the individual’s decision on whether to accept or reject a scientific claim. Polanyi was so radically dedicated to academic freedom that he feared undermining it would make scientific truth impossible and lead to totalitarianism.
The scientists’ march on Washington inevitably invokes Polanyi. It is inspired by his belief in an open society – one characterised by a flexible structure, freedom of belief and the wide spread of information.
A market for goods and services
But does Polanyi’s case make sense in the current era?
Polanyi recognised that Western science is, ultimately, a capitalist system. Like any market of goods and services, science comprises individual agents operating independently to achieve a collective good, guided by an invisible hand.
Scientists thus undertake research not to further human knowledge but to satisfy their own urges and curiosity, just as in Adam Smith’s example the baker makes the bread not out of sympathy for the hunger of mankind but to make a living. In both cases this results in a common good.
There is a difference between bakers and scientists, though. For Polanyi:
It appears, at first sight, that I have assimilated the pursuit of science to the market. But the emphasis should be in the opposite direction. The self coordination of independent scientists embodies a higher principle, a principle which is reduced to the mechanism of the market when applied to the production and distribution of material goods.
Gone is the ‘Republic of Science’
Polanyi was aligning science with the economic model of the 1960s. But today his assumptions, both about the market and about science itself, are problematic. And so, too, is the scientists’ march on the US capital, for adopting the same vision of a highly principled science.
Does the market actually work as Adam Smith said? That’s questionable today: economists George Akerlof and Robert Shiller have argued that the principle of the invisible hand now needs revisiting. To survive in our consumerist society, every player must exploit the market by any possible means, including by taking advantage of consumer weaknesses.
To wit, companies market food with unhealthy ingredients because they attract consumers; selling a healthy version would drive them out of the market. Science is subject to similarly perverse incentives. As one scientist remarked to The Economist, “There is no cost to getting things wrong. The cost is not getting them published”.
Polanyi also believed in a “Republic of Science” in which astronomers, physicists, biologists, and the like constituted a “Society of Explorers”. In their quest for their own intellectual satisfaction, scientists help society to achieve the goal of “self-improvement”.
That vision is difficult to recognise now. Evidence is used to promote political agendas and raise profits. More worryingly, the entire evidence-based policy paradigm is flawed by a power asymmetry: those with the deepest pockets command the largest and most advertised evidence.
A third victim of present times is the idea – central to Polanyi’s argument for a Republic of Science – that scientists are capable of keeping their house in order. In the 1960s, scientists still worked in interconnected communities of practice; they knew each other personally. For Polanyi, the overlap among different scientific fields allowed scientists to “exercise a sound critical judgement between disciplines”, ensuring self-governance and accountability.
Today, science is driven by fierce competition and complex technologies. Who can read or even begin to understand the two million scientific articles published each year?
Elijah Millgram calls this phenomenon the “New Endarkment” (the opposite of enlightenment), in which scientists have been transformed into veritable “methodological aliens” to one another.
The classic vision of science providing society with truth, power and legitimacy is a master narrative whose time has expired. The Washington March for Science organisers have failed to account for the fact that science has devolved into what Polanyi feared: it’s an engine for growth and profit.
A march suggests that the biggest problem facing science today is a post-truth White House. But that is an easy let-off. Science’s true predicaments existed before January 2017, and they will outlive this administration.
Our activism would be better inspired by the radical 1970s-era movements that sought to change the world by changing first science itself. They sought to provide scientific knowledge and technical expertise to local populations and minority communities while giving those same groups a chance to shape the questions asked of science. These movements fizzled out in the 1990s but echoes of their programmatic stance can be found in a recent editorial in Nature.
Over 3,770m years ago, the Earth looked very different. There were no plants, no animals, the sky was not blue. The surface would have resembled a bare rocky wasteland.
Yet it was around this time that we think the first life appeared, deep in the ocean around hot fissures in the seabed known as hydrothermal vents. Here, hot fluids circulate through the rocks on the seafloor, carrying iron and other elements out of the rocks and into the surrounding water. The chemicals and energy in these environments make them look like the perfect place for life to start.
To test this theory, my colleagues and I studied an ancient group of rocks in north-east Canada called the Nuvvuagittuq belt, dated to between 4,280m and 3,770m years old. Preserved within this belt are iron formations formed in settings analogous to hydrothermal vents today. And in them we found microfossils that we believe to be 300m years older than the previous oldest known microfossils, from rocks in western Australia dated to around 3,500m years old. That makes these the oldest known fossils and possibly the oldest known evidence for life on Earth.
To uncover the fossils, we cut slices of the rocks so thin you could see through them, then studied them with a microscope. In doing so, we found microscopic filaments and tubes of iron, ranging from 5-10 microns in diameter, less than half the width of a human hair, and up to half a millimeter in length. The tubes and filaments were very detailed features that shared remarkable similarities with fossils of microbes in younger rocks, and also with modern microbes.
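As a rough sanity check of those scales, the comparison holds up; note that the ~70 micron figure for a typical human hair is an assumption for illustration, not a number from the article:

```python
# Rough sanity check of the filament dimensions quoted above.
# The ~70 micron hair width is an assumed typical value, not from the article.
MICRONS_PER_MM = 1000

filament_diameters_um = (5, 10)            # reported range, in microns
filament_length_um = 0.5 * MICRONS_PER_MM  # "up to half a millimeter"
typical_hair_width_um = 70                 # assumption for comparison

# Even the widest filaments are under half a hair's width.
assert max(filament_diameters_um) < typical_hair_width_um / 2

# A half-millimeter filament is 50-100 times longer than it is wide.
aspect_ratio = filament_length_um / max(filament_diameters_um)
print(aspect_ratio)  # 50.0
```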
Features of these ancient filaments, such as their attachment to clumps of iron, are similar to those found in modern microbes, which use these clumps to hold themselves to rocks. These iron-oxidizing microbes trap iron coming out of underwater vents, which they use in a reaction to release chemical energy. They then use this energy to turn carbon dioxide from the surrounding water into organic matter, allowing them to grow.
How did we know they were fossils?
When we found the fossil structures, we knew they were very interesting and promising candidates for microfossils. But we needed to demonstrate that this is what they really were, and that they weren’t a non-biological phenomenon. So we assessed all the likely scenarios that could have formed the tubes and filaments, including chemical gradients in iron-rich gels and metamorphic stretching of the rocks. None of these mechanisms fitted our observations.
We then looked for chemical traces in the rocks that might have been left behind by microorganisms. We found organic matter preserved as graphite in a way that suggested it had been formed by microbes. We also found key minerals that are commonly produced by the decay of biological materials in sediments, such as carbonate and apatite (which contains phosphorus). These minerals also occurred in granule structures that commonly form in sediments around decaying organisms, and which sometimes preserve microfossil structures within them. All of these independent observations provided strong evidence for the microstructures’ biological origin.
Together, this evidence clearly demonstrates a strong biological presence in the 3,770m- to 4,280m-year-old rocks, pushing back the date of the earliest known microfossils by 300m years. To put that timescale into perspective: if we went back in time 300m years from today, dinosaurs would not even have existed yet.
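To make that comparison concrete, here is a minimal sketch of the timescales involved; the ~240m-years-ago date for the earliest dinosaurs is an assumed round figure added for perspective, not from the article:

```python
# Ages in millions of years before present.
rock_age_range = (3770, 4280)  # Nuvvuagittuq belt (from the article)
previous_oldest = 3500         # Western Australia microfossils (from the article)
earliest_dinosaurs = 240       # assumed round figure, not from the article

# The new fossils beat the previous record by roughly 300m years:
# at least 270m even at the rocks' youngest possible age, and more
# if they formed closer to 4,280m years ago.
record_gap = min(rock_age_range) - previous_oldest
assert record_gap >= 270

# And 300m years ago comfortably predates the first dinosaurs.
assert 300 > earliest_dinosaurs
```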
The fact that we found these lifeforms in hydrothermal vent deposits from so early in Earth’s history supports the long-standing theory that life arose in these kinds of environments. The environment in which we found these ancient microfossils, and their similarity to younger fossilized and modern bacteria, suggests that their iron-based metabolisms were among the first ways life sustained itself on Earth.
It’s also worth remembering that this discovery shows us life managed to take hold and rapidly evolve on Earth at a time when Mars had liquid water on its surface. This leaves us with the exciting possibility that if the conditions on the Martian surface and Earth were similar, life should also have begun on Mars over 3,770m years ago. Or else the Earth may have just been a special exception.