Age and dating with phylogenetics

Timing the phylogeny

Understanding the evolutionary history of species can be a complicated matter, both from theoretical and analytical perspectives. Although phylogenetics addresses many questions about evolutionary history, there are a number of limitations we need to consider in our interpretations.

One of these limitations we often want to address is the estimation of divergence times within the phylogeny; we want to know when exactly two evolutionary lineages (be they genera, species or populations) separated from one another. This is particularly important if we want to relate these divergences to Earth history and environmental factors to better understand the driving forces behind evolution and speciation. A traditional phylogenetic tree, however, won’t show this: the tree is scaled in terms of the genetic differences between the different samples in the tree. The rate of genetic differentiation does not always scale linearly with time, and it certainly doesn’t appear to be universal across lineages.

 

Anatomy of phylogenies.jpg
The general anatomy of a phylogenetic tree. A phylogeny describes the relationships of the tips (i.e. which are more closely related than others; referred to as the topology), how different these tips are (the lengths of the branches) and the order in which they separated in time (separations shown by the nodes). Different trees can share some traits but not others: the red box shows two phylogenetic trees with similar branch lengths (all of the branches are roughly the same) but different topologies (the tips connect differently: A and B are together on the left but not on the right, for example). Conversely, two trees can have the same topology but differing branch lengths (blue box). Note that the tips are all in the same positions in these two trees. Typically, it’s easiest to read a tree from right to left: the two tips whose branches meet first are the most similar genetically; the longer it takes for two tips to meet along the branches, the less similar they are genetically.

How do we do it?

The parameters

There are a number of parameters that are required for estimating divergence times from a phylogenetic tree. These can be summarised into two distinct categories: the tree model and the substitution model.

The first of these is relatively easy to explain; it describes the exact relationships of the different samples in our dataset (i.e. the phylogenetic tree). Naturally, this includes the topology of the tree (which determines which divergence times can be estimated in the first place). However, there is another very important factor in the process: the lengths of the branches within the phylogenetic tree. Branch lengths reflect the amount of genetic differentiation between the different tips of the tree. The longer the branch, the more genetic differentiation must have accumulated (usually also meaning that more time has passed from one end of the branch to the other). Even two phylogenetic trees with identical topology can give very different results if they vary in their branch lengths (see the above Figure).

The second category determines how likely mutations are from one particular nucleotide to another. While the details of this can get very convoluted, it essentially determines how quickly we expect certain mutations to accumulate over time, which will inevitably alter our predictions of how much time has passed along any given branch of the tree.
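As a (deliberately simple) worked example, consider the Jukes–Cantor (JC69) substitution model, which assumes every nucleotide change is equally likely. Under it, the raw proportion of differing sites can be corrected into an estimate of substitutions per site, accounting for ‘multiple hits’ at the same position. A sketch in Python, with made-up sequences:

```python
import math

def jc69_distance(seq1, seq2):
    """Evolutionary distance (substitutions per site) between two aligned
    sequences under the Jukes-Cantor (JC69) substitution model."""
    assert len(seq1) == len(seq2), "sequences must be aligned"
    # Raw proportion of sites that differ between the two sequences
    p = sum(a != b for a, b in zip(seq1, seq2)) / len(seq1)
    # JC69 correction: repeated substitutions at one site mean the true
    # distance grows faster than the observed difference
    return -0.75 * math.log(1 - (4.0 / 3.0) * p)

seq_a = "ACGTACGTACGTACGTACGT"
seq_b = "ACGTACGAACGTACTTACGT"  # 2 of 20 sites differ (p = 0.10)
print(round(jc69_distance(seq_a, seq_b), 4))  # slightly more than 0.10
```

Note how the corrected distance (just over 0.10) exceeds the raw 10% difference: the model assumes some substitutions have overwritten earlier ones along the branch.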

Calibrating the tree

However, at least one other important component is necessary to turn divergence time estimates into absolute, objective times. An external factor with an attached date is needed to calibrate the relative branch divergences; this can be in the form of a known mutation rate for all of the branches of the tree, or a date for at least one node in the tree based on additional information. These anchor either the mutation rate along the branches or the absolute date of at least one node in the tree (with the rest estimated relative to this point). The second method often involves placing a time constraint on a particular node of the tree based on prior information about the biogeography of the species (for example, we might know one species likely diverged from another after a mountain range formed: the age of the mountain range would be our constraint). Alternatively, we might include a fossil in the phylogeny which has been radiocarbon dated and place an absolute age on that instead.

Ammonite comic.jpg
Don’t you know it’s rude to ask an ammonite her age?

In regard to the former method, mutation rates describe how fast genetic differentiation accumulates as evolution occurs along a branch. Although mutations gradually accumulate over time, the rate at which they occur can depend on a variety of factors (even including the environment of the organism). Even within the genome of a single organism, there can be variation in the mutation rate: genes, for example, often gain mutations more slowly than non-coding regions.

Although mutation rates (generally in the form of a ‘molecular clock’) have traditionally been used for smaller datasets (e.g. for mitochondrial DNA), there are inherent issues with their assumptions. One is the assumption that a single rate applies to all branches in a tree equally, when different branches may have different rates. Second, different parts of the genome (even within the same individual) will have different evolutionary rates (like genes vs. non-coding regions). Thus, we tend to prefer using calibrations from fossil data or based on biogeographic patterns (such as the time a barrier likely split two branches, based on geological or climatic data).
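Either way, the arithmetic behind a single calibration point is straightforward: one dated node fixes the rate, and that rate converts every other branch depth into an age. A minimal sketch under a strict-clock assumption (one rate for the whole tree; all numbers are hypothetical):

```python
# One calibrated node converts relative branch lengths into absolute ages.
calibrated_depth = 0.05    # substitutions/site from the calibrated node to its tips
calibrated_age_mya = 10.0  # external date for that node (hypothetical fossil/biogeography)

# The calibration anchors the rate for the whole tree (strict clock)
rate = calibrated_depth / calibrated_age_mya  # subs/site per million years

# Any other node's depth can now be converted into an age
other_depth = 0.12
other_age_mya = other_depth / rate
print(round(other_age_mya, 1))  # 24.0 million years
```

Real programs relax this considerably (e.g. allowing rates to vary among branches), but the calibration still plays the same anchoring role.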

The analytical framework

All of these components are combined into various analytical frameworks or programs, each of which handles the data in different ways. Many of these are Bayesian model-based analyses, which in short generate hypothetical models of evolutionary history and divergence times for the phylogeny and test how well each fits the data provided (i.e. the phylogenetic tree). The algorithm then alters some aspect(s) of the model and tests whether this fits the data better than the previous model, repeating this for potentially millions of simulations to find the best model. Although models are typically a simplification of reality, they are a much more tractable approach to estimating divergence times (as well as to a number of other types of evolutionary genetics analyses that incorporate modelling).
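The ‘propose, test, accept or reject, repeat’ loop at the heart of these programs is a Metropolis-style MCMC sampler. A toy sketch, stripped down to a single parameter (a per-site substitution probability with a binomial likelihood, standing in for the far richer tree and substitution models that real dating software uses):

```python
import math
import random

# Toy Metropolis MCMC: estimate a per-site substitution probability r
# given that k of n aligned sites differ (binomial likelihood). This is
# a stand-in for the much richer models used by real dating software.
random.seed(1)
n, k = 1000, 120  # observed data: 120 of 1000 sites differ

def log_likelihood(r):
    # Log-probability of observing k differing sites out of n
    return k * math.log(r) + (n - k) * math.log(1 - r)

r = 0.5       # arbitrary starting model
samples = []
for step in range(20000):
    # Propose a slightly altered model
    r_new = min(max(r + random.gauss(0, 0.01), 1e-6), 1 - 1e-6)
    # Accept if it explains the data better, or occasionally even if
    # it's worse (in proportion to the likelihood ratio)
    if math.log(random.random()) < log_likelihood(r_new) - log_likelihood(r):
        r = r_new
    if step > 5000:  # discard the 'burn-in' phase
        samples.append(r)

posterior_mean = sum(samples) / len(samples)
print(round(posterior_mean, 2))  # close to the observed 0.12
```

The accepted models form a sample from the posterior distribution, which is also where the confidence intervals around divergence time estimates come from.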

Molecular dating pipeline
A (believe it or not, simplified) pipeline for estimating divergence times from a phylogeny. 1) We obtain our DNA sequences for our samples: in this example, each sample (A-E) is a representative of a single species. We align these together to make sure we’re comparing the same part of the genome across all of them. 2) We estimate the phylogenetic tree for our samples/species. In a Bayesian framework, this means creating simulation models containing a certain substitution model and a given tree model (containing a certain topology and branch lengths). Together, these two models form the likelihood model: we then test how well this model explains our data (i.e. the likelihood of getting the patterns in our data if this model were true). We repeat these simulations potentially hundreds of thousands of times until we pinpoint the most likely model we can get. 3) Using our resulting phylogeny, we then calibrate some parts of it based on external information. This could be either by including a carbon-dated fossil (F) within the phylogeny, or by constraining the age of one node based on biogeographic information (the red circle and cross). 4) Using these calibrations as a reference, we then estimate the most likely ages of all the splits in the tree, giving our final dated phylogeny.

Despite the developments in the analytical basis of estimating divergence times over the last few decades, there are still a number of limitations inherent in the process. Many of these relate to the assumptions of the underlying model (such as an accurate phylogenetic tree and correct estimates of evolutionary rate) used to build the analysis and generate simulations. In the case of calibrations, it is also critical that they are correctly dated based on independent methods: inaccurate radiocarbon dating of a fossil, for example, could throw off the estimates across the entire tree. That said, these factors are intrinsic to any phylogenetic analysis and are regularly considered by evolutionary biologists in the interpretation and discussion of results (such as by reporting confidence intervals around estimates).

Understanding the temporal aspects of evolution and being able to relate them to a real estimate of age is a difficult affair, but an important component of many evolutionary studies. Obtaining good estimates of the timing of divergence of populations and species through molecular dating is but one aspect in building the picture of the history of all organisms, including (and especially) humans.

The direction of selection

The nature of adaptation

One of the most fundamental aspects of natural selection and evolution is, of course, the underlying genetic variation that shapes the physical, selected traits. Most commonly, this involves trying to understand how changes in the distribution and frequencies of particular genetic variants (alleles) occur in nature and what forces of natural selection are shaping them. Remember that natural selection acts directly on the physical characteristics of species; if these characteristics are genetically determined (which many are), then we can observe the flow-on effects on the genetic diversity of the target species.

Although we might expect natural selection to be a fairly predictable force, there are a myriad of ways it can shape, reduce or maintain the genetic diversity and identity of populations and species. In the following examples, we’re going to assume for simplicity that the mentioned traits are coded for by a single gene with two different alleles. Thus, one allele = one version of the trait (and the two can be referred to interchangeably). With that in mind, let’s take a look at the three main broad types of changes we observe in nature.

Directional selection

Arguably the most traditional perspective of natural selection is referred to as ‘directional selection’. In this case, natural selection causes one allele to be favoured over another, which causes it to increase dramatically in frequency compared to the alternative allele. The reverse effect (natural selection pushing against a maladaptive allele) is still covered by directional selection, except that it functions in the opposite way (the allele under negative selection decreases in frequency, shifting towards the alternative allele).

Directional selection diagram
An example of directional selection. In this instance, we have one population of cats and a single phenotypic trait (colour) which ranges from 0 (yellow) to 1 (red). Red colour is selected for above all other colours; the original population has a pretty diverse mix of colours to start. Over time, we can see the average colour of the entire population moves towards more red colours whilst yellow colours start to disappear. Note that although the final population is predominantly red, there is still some (minor) variation in colours. These changes are reflected in the distribution of the colour-coding alleles (right), as it moves towards the red end of the spectrum.
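A back-of-the-envelope version of the cat example above can be simulated in a few lines. This is a deliberately minimal haploid model with an invented selection coefficient; real diploid dynamics add dominance terms, but the qualitative sweep is the same:

```python
# Minimal haploid model of directional selection on one biallelic locus.
# The favoured 'red' allele has a relative fitness advantage s; both the
# selection coefficient and starting frequency are invented.
s = 0.10   # selection coefficient favouring the red allele
p = 0.05   # starting frequency of the red allele

trajectory = [p]
for generation in range(100):
    # After selection, red's share is re-weighted by its fitness (1 + s)
    p = p * (1 + s) / (p * (1 + s) + (1 - p))
    trajectory.append(p)

print(round(trajectory[25], 2))   # red is already common
print(round(trajectory[100], 2))  # and ends up nearly fixed
```

Just as in the figure, the favoured allele rises towards fixation but some residual variation lingers for a long time.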

Balancing selection

Natural selection doesn’t always push allele frequencies in different directions, however; sometimes it maintains the diversity of alleles in the population. This is what happens in ‘balancing selection’ (related to, and sometimes conflated with, ‘stabilising selection’, which strictly refers to selection favouring intermediate phenotypes). In this case, natural selection favours non-extreme allele frequencies, pushing the distribution of allele frequencies towards the centre. This may happen if deviations from the original gene, regardless of the specific change, can have strongly negative effects on the fitness of an organism, or in genes that are most fit when there is a decent amount of variation within the population (such as the MHC region, which contributes to immune response). There are a couple of other reasons balancing selection may occur, though.

Heterozygote advantage

One example is known as ‘heterozygote advantage’. This is when an organism with two different alleles of a particular gene has greater fitness than an organism with two identical copies of either allele. A seemingly bizarre example of heterozygote advantage relates to sickle cell anaemia in African populations. Sickle cell anaemia is a serious genetic disorder which is encoded for by recessive alleles of a haemoglobin gene; thus, a person has to carry two copies of the disease allele to show damaging symptoms. While this trait would ordinarily be strongly selected against in many populations, it is maintained in some African populations by the presence of malaria. This seems counterintuitive; why does the presence of one disease maintain another?

Well, it turns out that malaria is not very good at infecting sickle cells; there are a few suggested mechanisms for why, but no clear single answer. Naturally, suffering from either sickle cell anaemia or malaria is unlikely to confer fitness benefits. In this circumstance, natural selection actually favours having one sickle cell allele; while being a carrier isn’t ordinarily as healthy as having no sickle cell alleles, it does actually make the person somewhat resistant to malaria. Thus, in populations where there is a selective pressure from malaria, there is a heterozygote advantage for the sickle cell allele. For those African populations without likely exposure to malaria, sickle cell anaemia is strongly selected against and less prevalent.

Malaria and sickle diagram
A diagram of how heterozygote advantage works in sickle cell anaemia and malaria resistance. On the top we have our two main traits: the blood cell shape (which has two different alleles; normal and sickle celled) and malaria infection by mosquitoes. Blue circles indicate that the trait has good fitness, whilst red crosses indicate the trait has bad fitness. For the left hand person, having two sickle cell alleles (ss) means they are symptomatic of sickle cell anaemia and is unlikely to have a good quality of life. On the right, having two normal blood cell alleles (SS) means that he is susceptible to malaria infection. The middle person, however, having only one sickle cell allele (Ss) means they are asymptomatic but still resistant to malaria. Thus, being heterozygous for sickle cell is actually beneficial over being homozygous in either direction: this is reflected in the distribution of alleles (bottom). The left side is pushed down by sickle cell anaemia whilst the right side is pushed down by malaria, thus causing both blood cell alleles (s and S) to be maintained at an intermediate frequency (i.e. balanced). 
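The balance the figure describes falls out of the standard one-locus diploid selection recursion. A hedged sketch with invented fitness costs (illustrative numbers only, not real epidemiological values); classical theory predicts an equilibrium frequency of S at cost_anaemia / (cost_malaria + cost_anaemia):

```python
# One-locus diploid selection with heterozygote advantage (Ss fittest).
# Fitness costs are illustrative only, not real epidemiological values.
cost_malaria = 0.15  # fitness cost of SS (susceptible to malaria)
cost_anaemia = 0.80  # fitness cost of ss (sickle cell anaemia)
w_SS, w_Ss, w_ss = 1 - cost_malaria, 1.0, 1 - cost_anaemia

p = 0.99  # frequency of the normal allele S, starting near fixation
for generation in range(500):
    q = 1 - p
    w_bar = p*p*w_SS + 2*p*q*w_Ss + q*q*w_ss  # mean population fitness
    # Standard diploid recursion: S's new frequency is its fitness-weighted share
    p = (p*p*w_SS + p*q*w_Ss) / w_bar

# Theory predicts equilibrium at cost_anaemia / (cost_malaria + cost_anaemia)
print(round(p, 3))  # ~0.842: both alleles are maintained
```

Neither allele fixes: selection against each homozygote holds both S and s at intermediate frequencies, exactly the ‘balanced’ outcome in the figure.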

Frequency-dependent selection

Another form of balancing selection is called ‘frequency-dependent selection’, where the fitness of an allele is inversely proportional to its frequency. Thus, once the allele has become common due to selection, the fitness of that allele is reduced and selection will start to favour the alternative allele (which is at much lower frequency). The constant back-and-forth tipping of the selective scales results in both alleles being maintained at an equilibrium.

This can happen in a number of different ways, but often the rarer trait/allele is fundamentally more fit because of its rarity. For example, if one allele allows an individual to use a new food source, it will be very selectively fit due to the lack of competition with others. However, as that allele accumulates within the population and more individuals start to feed on that food source, the lack of ‘uniqueness’ will mean that it’s not particularly better than the original food source. A balance between the two food sources (and thus alleles) will be maintained over time as shifts towards one will make the other more fit, and natural selection will compensate.

Frequency dependent selection diagram
An example of frequency-dependent selection. The colour of the cat indicates both their genotype and their food sources: black cats eat red apples whilst green cats eat green apples (this species has apparently developed herbivory, okay?) To start with, the incredibly low frequency of green cats means that the one green cat can exploit a huge food source compared to black cats. Because of this, natural selection favours green cats. However, in the next generation evolution overcompensates and produces way too many green cats, and now black cats are getting much more food. Natural selection bounces back to favour black cats. Eventually, this causes an equilibrium balance of the two cat types (as a shift one way will cause a shift back the other way immediately after). These changes are reflected in the overall frequency of the two types over time (top right), which eventually evens out. The bottom right figure demonstrates that for both cat types, the frequency of that colour is inversely proportional to the overall fitness (measured by proxy as the amount of food per cat).
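The back-and-forth in the figure can be captured by making each allele's fitness decline as it becomes common. A minimal sketch (the selection strength and starting frequency are invented):

```python
# Negative frequency-dependent selection: each allele is fitter when rare.
# Selection strength and starting frequency are invented.
s = 0.5
p = 0.1  # frequency of the 'green' allele, initially rare

history = [p]
for generation in range(200):
    w_green = 1 + s * (0.5 - p)  # green does well while it's rare
    w_black = 1 + s * (p - 0.5)  # black does well while green is common
    p = p * w_green / (p * w_green + (1 - p) * w_black)
    history.append(p)

print(round(history[1], 3))   # the rare green allele rises quickly
print(round(history[-1], 3))  # settles at the balanced equilibrium, 0.5
```

The rare allele always has the advantage, so any deviation from the equilibrium gets pushed back; both alleles persist indefinitely.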

Disruptive selection

A third category of selection (although not as frequently mentioned) is known as ‘disruptive selection’, which is essentially the direct opposite of balancing selection. In this case, both extremes of allele frequencies are favoured (e.g. a frequency of 1 for either allele) but intermediate frequencies are not. This can be difficult to untangle in natural populations, since it could technically be attributed to two different cases of directional selection: each allele of the same gene is directionally selected for, but in opposite populations and directions, so that the overall pattern shows very few intermediates.

In direct contrast to balancing selection, disruptive selection can often be a case of heterozygote disadvantage (although it’s rarely called that). In these examples, it may be that individuals which are not genetically committed to one end or the other of the frequency spectrum are maladapted since they don’t fit in anywhere. An example would be a species that occupies both the desert and a forested area, with little grassland-type habitat in the middle. For the relevant traits, strongly desert-adapted genes would be selected for in the desert and strongly forest-adapted genes would be selected for in the forest. However, the lack of gradient between the two habitats means that individuals that are half-and-half are less adaptive in both the desert and the forest. A case of jack-of-all-trades, master of none.

Disruptive selection diagram
The above example of disruptive selection. Bird colour is coded for by a single gene; green birds have a HH genotype, orange birds have a hh genotype, and yellow birds are heterozygotes (Hh). Each homozygote colour has a habitat where it is most adaptive: green birds do well in the forest whereas orange birds do well in the desert. However, there’s no intermediate habitat between the two, and so yellow birds don’t really fit well anywhere; they’re outcompeted in the forest and desert by the respective other colours. This means selection favours either extreme (homozygotes), shown in the top right. If we split up the two alleles of the genotype, though, we can see that this disruptive selection is really the product of two directional selection pressures working in inverse directions: H is favoured at one end and h at the other.
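Disruptive selection on a single locus is effectively heterozygote disadvantage (underdominance), and the same diploid recursion used for the sickle cell example shows its signature: the population fixes whichever allele it starts nearer to. A sketch with invented fitnesses:

```python
# One-locus diploid model of heterozygote disadvantage (underdominance):
# Hh is least fit, so the population fixes whichever allele it starts
# nearer to. Fitness values are invented for illustration.
w_HH, w_Hh, w_hh = 1.0, 0.7, 1.0

def run(p, generations=200):
    """Iterate the standard diploid selection recursion for allele H."""
    for _ in range(generations):
        q = 1 - p
        w_bar = p*p*w_HH + 2*p*q*w_Hh + q*q*w_hh
        p = (p*p*w_HH + p*q*w_Hh) / w_bar
    return p

print(round(run(0.60), 3))  # starts H-biased: H sweeps to fixation
print(round(run(0.40), 3))  # starts h-biased: H is lost instead
```

This is why the pattern can look like two separate cases of directional selection: the forest population fixes H, the desert population fixes h, and the intermediate heterozygotes disappear from both.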

Direction of selection

Although it would be convenient if natural selection were entirely predictable, it often catches us by surprise in how it acts on and changes species and populations in the wild. Careful analysis and understanding of the different processes and outcomes of adaptation can feed our overall understanding of evolution, and at least help point our predictions in the right direction.

Fantastic Genes and Where to Find Them

The genetics of adaptation

Adaptation and evolution by natural selection remains one of the most significant research questions in many disciplines of biology, and this is undoubtedly true for molecular ecology. While traditional evolutionary studies have been based on the physiological aspects of organisms and how this relates to their evolution, such as how these traits improve their fitness, the genetic component of adaptation is still somewhat elusive for many species and traits.

Hunting for adaptive genes in the genome

We’ve previously looked at the two main categories of genetic variation: neutral and adaptive. Although we’ve focused predominantly on the neutral components of the genome, and the types of questions they can answer about demographic history, geographic influences and the effect of genetic drift, neutral markers cannot tell us (directly) about the process of adaptation and natural selective changes in species. To look at this area, we’d have to focus on adaptive variation instead; that is, genes (or other related genetic markers) which directly influence the ability of a species to adapt and evolve. These are directly under natural selection, either positively (‘selected for’) or negatively (‘selected against’).

Given how complex organisms, the environment and genomes can be, it can be difficult to determine exactly what is a real (i.e. strong) selective pressure, how this is influenced by the physical characteristics of the organism (the ‘phenotype’) and which genes are fundamental to the process (the ‘genotype’). Even determining the relevant genes can be difficult; how do we find the needle-like adaptive genes in a genomic haystack?

Magnifying glass figure
If only it were this easy.

There’s a variety of different methods we can use to find adaptive genetic variation, each with particular drawbacks and strengths. Many of these are based on tests of the frequency of alleles, rather than on the exact genetic changes themselves; adaptation works more often by favouring one variant over another rather than completely removing the less-adaptive variant (this would be called ‘fixation’). So measuring the frequency of different alleles is a central component of many analyses.

FST outlier tests

One of the most classical examples is called an ‘FST outlier test’. This can be a bit complicated without understanding what FST actually measures: in short, it’s a statistical measure of population differentiation due to genetic structure. The FST value between two populations describes how genetically similar they are to one another. An FST value of 1 implies that the two populations are as genetically different as they could possibly be, whilst an FST value of 0 implies that they are genetically identical.

Generally, FST reflects neutral genetic structure: it gives a background of how different, on average, two populations are. However, if we know what the average amount of genetic differentiation should be for a neutral DNA marker, then we would predict that adaptive markers differ significantly from it. This is because a gene under selection should be more directly pushed towards or away from one variant (allele) than another, and much more strongly than the neutral variation would predict. Thus, we might assume that alleles which are far more or less differentiated than the average pattern are under selection. This is the basis of the FST outlier test; by comparing two or more populations (using FST) and looking at the distribution of allele frequencies, we can pick out the few loci that vary from the average pattern and suggest that they are under selection (i.e. are adaptive).
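A toy version of the scan is easy to sketch. The FST estimator here is the simple heterozygosity-based form (HT − HS)/HT for two equally sized populations, the loci and frequencies are invented, and the ‘10× the median’ cut-off is arbitrary (real methods fit a proper null distribution instead):

```python
import statistics

def fst(p1, p2):
    """Heterozygosity-based FST for one biallelic locus, given its
    allele frequency in two equally sized populations."""
    p_bar = (p1 + p2) / 2                            # pooled allele frequency
    h_total = 2 * p_bar * (1 - p_bar)                # expected heterozygosity, pooled
    h_within = (2*p1*(1 - p1) + 2*p2*(1 - p2)) / 2   # average within populations
    return 0.0 if h_total == 0 else (h_total - h_within) / h_total

# Hypothetical allele frequencies at four loci in populations 1 and 2
loci = {"locus1": (0.50, 0.55), "locus2": (0.48, 0.52),
        "locus3": (0.51, 0.47), "colour": (0.10, 0.90)}

values = {name: fst(p1, p2) for name, (p1, p2) in loci.items()}
# The median is robust to the outlier itself, so it tracks the neutral background
background = statistics.median(values.values())
outliers = [name for name, v in values.items() if v > 10 * background]
print(outliers)  # the strongly differentiated 'colour' locus stands out
```

Real datasets involve thousands of loci and careful significance testing, but the logic is the same: most loci define the background, and the few that sit far outside it are candidate targets of selection.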

There are a few significant drawbacks to FST outlier tests. One of the biggest is that genetic drift can also produce a large number of outliers; in a small population, for example, one allele might be fixed (have a frequency of 1, with no alternative allele in the population) simply because there is not enough diversity or population size to sustain more alleles. Even if this particular allele was extremely detrimental, it’d still appear to be favoured by natural selection just because of drift.

Drift leading to outliers diagram
An example of genetic drift leading to outliers, featuring our friends the cat population. Top row: Two cat populations, one small (left; n = 5) and one large (middle, n = 12) show little genetic differentiation between them (right; each triangle represents a single gene or locus; the ‘colour’ gene is marked in green). The average (‘neutral’) pattern of differentiation is shown by the dashed line. Much like in our original example, one cat in the small population is horrifically struck by lightning and dies (RIP again). Now when we compare the frequency of the alleles of the two populations (bottom), we see that (because a green cat died), the ‘colour’ locus has shifted away from the general trend (right) and is now an outlier. Thus, genetic drift in the ‘colour’ gene gives the illusion of a selected locus (even though natural selection didn’t cause the change, since colour does not relate to how likely a cat is to be struck by lightning).

Secondly, the cut-off between ‘significant’ and ‘relatively different but possibly not under selection’ can be a bit arbitrary; some genes that are under weak selection can go undetected. Furthermore, recent studies have shown a growing appreciation for polygenic adaptation, where tiny changes in the allele frequencies of many different genes combine to cause strong evolutionary changes. For example, despite the clear heritable nature of height (tall people often have tall children), there is no single clear ‘height’ gene: instead, it appears that hundreds of genes each make a very minor contribution to height.

Polygenic height figure final
In this example, we have one tall parent (top) who produces two offspring; one who is tall (left) and one who isn’t (right). In order to understand what genetic factors are contributing to their height differences, we compare their genetics (right; each dot represents a single locus). Although there aren’t any particular loci that look massively different between the two, the cumulative effect of tiny differences (the green triangles) together make one person taller than the other. There are no clear outliers, but many (poly) different genes (genic) acting together.

Genotype-environment associations

To overcome these biases, we might instead take a more hypothesis-driven approach called ‘genotype-environment association’ (GEA). This analysis differs in that we select what we think our selective pressures are: often environmental characteristics such as rainfall, temperature, habitat type or altitude. We then take two types of measures per individual organism: the genotype, through DNA sequencing, and the relevant environmental values for that organism’s location. We repeat this over the full distribution of the species, taking a good number of samples per population and making sure we capture the full variation in the environment. Then we perform a correlation-type analysis, to see whether there’s a connection or trend between any particular alleles and any environmental variables. The most relevant variables are often pulled out of the environmental dataset and focused on, to reduce noise in the data.
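At its simplest, the ‘correlation-type analysis’ boils down to testing each locus's allele frequency against each environmental variable. A toy sketch using plain Pearson correlation across five populations (all numbers invented; real GEA methods also account for neutral population structure):

```python
# Toy genotype-environment association: correlate each locus's allele
# frequency with an environmental variable across populations.
# All numbers are invented for illustration.
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

rainfall = [200, 400, 600, 800, 1000]  # mm/year, one value per population

# Allele frequencies of two loci across the same five populations
freq_candidate = [0.10, 0.25, 0.50, 0.70, 0.95]  # tracks rainfall closely
freq_neutral = [0.40, 0.60, 0.35, 0.55, 0.45]    # no obvious trend

print(round(pearson(rainfall, freq_candidate), 2))  # strong association
print(round(pearson(rainfall, freq_neutral), 2))    # weak association
```

A locus whose frequency tracks rainfall this tightly would be flagged as a candidate for rainfall-driven selection; the flat locus would not.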

The main benefit of GEA over FST outlier tests is that it’s unlikely to be as strongly influenced by genetic drift. Unless (coincidentally) populations are drifting at the same genes in the same pattern as the environment, the analysis is unlikely to falsely pick it up. However, it can still be confounded by neutral population structure; if one population randomly has a lot of unique alleles or variation, and also occurs in a somewhat unique environment, it can bias the correlation. Furthermore, GEA is limited by the accuracy and relevance of the environmental variables chosen; if we pick only a few, or miss the most important ones for the species, we won’t be able to detect a large number of very relevant (and likely very selective) genes. This is a universal problem in model-based approaches and not just limited to GEA analysis.

New spells to find adaptive genes?

It seems likely that with increasing datasets and better analytical platforms, many more types of analysis will be developed to delve deeper into the adaptive aspects of the genome. With whole-genome sequencing starting to become a reality for non-model species, better annotation of current genomes and a steadily increasing database of functional genes, the ability of researchers to investigate evolution and adaptation at the genomic level is also increasing.

Pseudo or science? Interpreting scientific reports

Telling the real from the fake

The phrase ‘fake news’ seems to get thrown around ad nauseam these days, but there’s a reason for it (besides the original somewhat famous coining of the phrase). Inadvertently bad, or sometimes downright malicious, reporting of various apparent ‘trends’ or ‘patterns’ is rife throughout nearly all forms of media. In particular, many entirely subjective or blatantly falsified presentations or reports of ‘fact’ cloud real scientific inquiry and its distillation into the broader community. In fact, a recent study has shown that falsified science spreads through social media orders of magnitude faster than real science: so why is this? And how do we spot the real from the fake?

It’s imperative that we understand what real science entails to be able to separate it from pseudoscience. Of course, scientific rigour and method are always of utmost importance, but these can be hard to detect (or can be effectively lied about through colourful language choices). When reading a scientific article, whether it’s direct from the source (a journal, such as Nature or Science) or secondarily through a media outlet such as the news or online sources, there are a few things you should always look for that will help discern between the two categories.

Peer-review and adequate referencing

Firstly, is the science presented in an objective, logical manner? Does it systematically demonstrate the study system and question, with relevant references to peer-reviewed literature? Good science builds upon the wealth of previously done good science to contribute to a broader field of knowledge; in this way, critical observations and alternative ideas can be compared and contrasted to steer the broader field. Even entirely novel science that goes against the common consensus will reference and build upon prior literature to justify the necessity and design of the study. Having written more than one literature review in my life, I can safely assure you that there is no shortage of relevant scientific studies that need to be read, understood and built upon in any future scientific study.

 

Methods, statistics and sampling

Secondly, is there a solid methodological basis for the science? In almost all cases this will include some kind of statistical measure of the validity (and accuracy) of the results. How does the sample size of the study measure up to the target group? Remember, a study size of 500 people is definitely too small to infer the medical conditions of all humans, yet we rarely get sample sizes even that big in evolutionary genetics studies (especially of non-model species). The sampling regime is extremely important for interpreting the results: in particular, keep in mind whether there is an inherent bias in the way the sampling has been done. Are some groups more represented than others? Where do the samples come from? What other factors might be influencing the results, based on the origin of the samples?

Cat survey comic 2
Despite having a large sample size, and a significant result (p<0.05), this study cannot conclude that all dogs are awful. It can conclude, however, that cats are statistically significant assholes.

Presentation and language of findings

Thirdly, how does the source present the results? Does it make claims that seem beyond a feasible conclusion based on the study itself? Even if the underlying study is scientific, many secondary sources have a tendency to ‘sensationalise’ the results in order to make them both more appealing and more digestible to the general public. This is only exacerbated by the lack of information about the scientific methods of the original paper, the actual statistics, or an accurate summation of those statistics. Furthermore, a real scientific study will (in most cases) avoid evocative words such as ‘prove’, as a fundamental aspect of science is that no study is ever 100% ‘proven’ (see falsifiability below). Proofs are a relevant concept in mathematics, but they fall under a different category altogether.

Here’s an example: recently, an Australian mainstream media outlet (among many) shared a story about a ‘recent’ (six-month-old) study that found that second-born children are more likely to be criminals and first-born children have higher IQs. As you might expect, the original study does not imply that being born second will suddenly make you a murderer, nor that being born first will make you a prodigy. Instead, the authors suggest that differential parental investment and attention (between children of different birth order) may be a potential mechanism. They ruled out, based on a wealth of statistics, the influence of alternative factors such as health or education (both in quality and quantity). Thus, there is a correlative (read: not causative) effect of birth order on these characteristics. If you took the newscast at face value (or read some of the misguided comments), you might think otherwise.

Falsifiability 

Fourthly, are the hypotheses in the study falsifiable? One of the foundations of the modern scientific method is the requirement that any real scientific hypothesis be falsifiable; that is, there must be a way to show evidence against it. This can be difficult to evaluate, but it is why some broad philosophical questions are considered ‘unscientific’. A classic example is the phrase “all swans are white”, which was apparently historically believed in Europe (where black swans do not naturally occur). This statement is technically falsifiable, since finding a non-white swan would ‘disprove’ the hypothesis. Lo and behold, Europeans arrive in Australia and find that, actually, some swans are black. The original statement was thus falsified.

Swan comic 2
Well, I’ll be damned falsified. Just pretend the swan is actually black: I don’t have enough ink to make it realistic…

The role of the peer: including you!

Peer-review is a critical aspect of the scientific process, and despite some conspiracy-theory-esque remarks about the secret Big Science Society, it generally works. While individual people inevitably have their own personal biases and are naturally subjective to some degree (no matter how hard we may try to be objective), a larger number of well-informed, critical thinkers helps to broaden the focus and perspective surrounding any scientific subject. Remember, nothing is more critical of science than science itself.

Peer review comic
One of the most apt representations of peer-review I’ve ever seen, from Dr. Nick D. Kim (PhD). Source: here.

While peer-review is technically aimed at other scientists as a way to steer and inform research, the input of outsider, non-specialist readers can still be informative. Looking closely at science, and better understanding both how it is done and what it is showing, can help us evaluate how valuable science is to broader society and turn scientific information into useful, everyday applications. Furthermore, by educating ourselves on what is real science, and what is disruptive drivel, we can aid the development of science and reduce the slowing impact of misinformation and deceit.