“Who Do You Think You Are?”: studying the evolutionary history of species

The constancy of evolution

Evolution is a constant, unending force that pushes and shapes species according to the context of their environment: sometimes rapidly, sometimes much more gradually. Although we often think of evolution in terms of discrete points (when one species becomes two, or when a particular trait evolves), it is nevertheless a continual process that drives change in species. These changes are often difficult to ‘unevolve’ and carry a certain ‘evolutionary inertia’; because of this, it’s often critical to understand how a history of evolution has generated the organisms we see today.

What do I mean when I say evolutionary history? Well, the term is fairly broad and can relate to the evolution of particular traits (or types of traits), or to the genetic variation and changes that underlie them. The questions we ask of evolutionary history depend on which end of the timescale we look at: recent evolutionary histories, and the genetics related to them, will tell us different things to very ancient evolutionary histories. Let’s hop into our symbolic DeLorean and take a look back in time, shall we?

[Image: Labelled_evolhistory]
A timeslice of evolutionary history (a pseudo-phylogenetic tree, I guess?), going from more recent history (bottom left) to deeper history (top right). Each region denoted in the tree represents the general area of focus for each of the following blog headings. 1: Recent evolutionary history might look at individual pedigrees, or comparing populations of a single species. 2: Slightly older comparisons might focus on how species have arisen, and the factors that drive this (part of ‘phylogeography’). 3: Deep history might focus on the origin of whole groups of organisms, with a focus on the evolution of particular traits like venom or sociality.

Very recent evolutionary history: pedigrees and populations

While we might ordinarily consider ‘evolutionary history’ to refer to events that happened thousands or millions of years ago, it can still be informative to look at history just a few generations back. This often involves looking at pedigrees, such as in breeding programs, and trying to see how very short-term and rapid evolution may have occurred; this can even include investigating how a particular breeding program might accidentally be causing a species to evolve to adapt to captivity! Rarely does this get referred to as true evolutionary history, but it fits on the spectrum, so I’m going to count it. We might also look at how current populations are evolving differently to one another, to try and predict how they’ll evolve into the future (and thus determine which ones are most at risk, which ones hold critically important genetic diversity, and the overall survivability of the species as a whole). This is the basis of ‘evolutionarily significant units’, or ESUs, which we previously discussed on The G-CAT.

[Image: Captivefishcomic]
Maybe goldfish evolved 3 second memory to adapt to the sheer boringness of captivity? …I’m joking, of course: the memory thing is a myth and adaptation works over generations, not a lifetime.

A little further back: phylogeography and species

A little further back, we might start to look at how different populations have formed or changed in semi-recent history (usually looking at the effects of human impacts: we’re really good at screwing things up, I’m sorry to say). This can include looking at how populations have (or have not) adapted to new pressures, how stable populations have been over time, or whether new populations are being ‘made’ by recent barriers. At this level of populations and young (or incipient) species, we find the field of ‘phylogeography’: the study of how historic climate and geography have shaped the evolution of species or caused new species to evolve.

[Image: Evolution of salinity]
An example of trait-based phylogenetics, looking at the biogeographic patterns and evolution/migration to freshwater in perch-like fishes, by Chen et al. (2014). The phylogeny shows that a group of fishes adapted to freshwater environments (black) from a (likely) saltwater ancestor (white), with euryhaline tolerance evolving two separate times (grey).

One high profile example of phylogeographic studies is the ‘Out of Africa’ hypothesis and debate for the origination of the modern human species. Although there has been no shortage of debate about the origin of modern humans, as well as the fate of our fellow Neanderthals and Denisovans, the ‘Out of Africa’ hypothesis still appears to be the most supported scenario.

[Image: human phylogeo]
A generalised diagram of the ‘Out of Africa’ hypothesis of human migration, from Oppenheimer, 2012. 

Phylogeography is also a key component for determining and understanding ‘biodiversity hotspots’; that is, regions which have generated high levels of species diversity and contain many endemic species and populations, such as tropical hotspots or remote temperate regions. These are naturally of very high conservation value and contribute a huge amount to Earth’s biodiversity, ecological functions and potential for us to study evolution in action.

Deep, deep history: phylogenetics and the origin of species (groups)

Even further back, we start to delve into the more traditional concept of evolutionary history. We start to look at how species have formed: what factors caused them to become new species, how stable the new species are, and what genetic components underlie the change. This subfield of evolution is called ‘phylogenetics’, and relates to understanding how species or groups of species have evolved and are related to one another.

Sometimes, this includes trying to look at how particular diagnostic traits have evolved in a certain group, like venom within snakes or eusociality in bees. Phylogenetic methods are even used to try and predict which species of plants might create compounds that are medically valuable (like aspirin)! Similarly, we can try and predict how invasive a pest species may be based on its phylogenetic relationships (how closely related the species are) and physiological traits, in order to safeguard against groups of organisms that are likely to run rampant in new environments. It’s important to understand how and why these traits have evolved to get a good understanding of exactly how the diversity of life on Earth came about.

[Image: evolution of venom]
An example of looking at trait evolution with phylogenetics, focusing on the evolution of venom in snakes, from Reyes-Velasco et al. (2014). The size of the boxes demonstrates the number of species in each group, with the colours reflecting the number of venomous (red) vs. non-venomous (grey) species. The red dot shows the likely origin of venom.

Phylogenetics also allows us to determine which species are the most ‘evolutionarily unique’; all the special little creatures of planet Earth which represent their own unique types of species, such as the tuatara or the platypus. Naturally, understanding exactly how precious and unique these species are suggests we should focus our conservation attention on them, since there’s nothing else in the world that even comes close!

Who cares what happened in the past, right? Well, I do, and you should too! Evolution forms an important component of any conservation management plan, since we obviously want to make sure our species can survive into the future (i.e. adapt to new stressors). Trying to maintain the most ‘evolvable’ groups, particularly within breeding programs, can often be difficult when we have to balance inbreeding depression (not having enough genetic diversity) with outbreeding depression (diluting locally adapted genetic diversity by mixing in poorly suited variants). Often, we can best avoid these by identifying which populations are evolutionarily different to one another (see ESUs) and using that as a basis, since outbreeding vs. inbreeding depression can be very difficult to measure. This all goes back to the concept of ‘adaptive potential’ that we’ve discussed a few times before.

In any case, a keen understanding of the evolutionary trajectory of a species is a crucial component for conservation management and to figure out the processes and outcomes of evolution in the real world. Thus, evolutionary history remains a key area of research for both conservation and evolution-related studies.

 

What’s the story with these little fish?

The pygmy perches

I’ve mentioned a few times in the past that my own research centres around a particular group of fish: the pygmy perches. When I tell people about them, sometimes I get the question “why do you want to study them?” And to be fair, it’s a good question: there must be something inherently interesting about them to be worth researching. And there is plenty.

Pygmy perches are a group of very small (usually 4-6cm) freshwater fish native to temperate Australia: they’re found throughout the southwest corner of WA and the southeast of Australia, stretching from the mouth of the Murray River in SA up to lower Queensland (predominantly throughout the Murray-Darling Basin) and even in northern Tasmania. There’s a massive space in the middle where they aren’t found: this is the Nullarbor Plain, which is a significant barrier for nearly all freshwater species (since it holds practically no water).

[Image: Unmack_distributions]
The distributions of the different pygmy perch species (excluding Bostockia porosa, which belongs to a related but different group), taken from Unmack et al. (2011). The black region in the bottom right indicates the Nullarbor Plain, which separates the eastern and western species.

The group consists of 2 genera (Nannoperca and Nannatherina) and 7 currently described species, although there could be as many as 10 actual species (see ‘cryptic species’: I’ll elaborate on this more in future posts…). They’re very picky about their habitat, preferring to stay within low-flow waterbodies with high vegetation cover, such as floodplains and lowland creeks. Most species have a lifespan of a couple of years, with different breeding times depending on the species.

Why study pygmy perches?

So, they’re pretty cute little fish. But unfortunately, that’s not usually enough justification to study a particular organism. So, why does the Molecular Ecology Lab choose to use pygmy perch as one (of several) focal groups? Well, there are a number of different reasons.

The main factors that contribute to their research interest are their other characteristics: because they’re so small and are such habitat specialists, they often form small, isolated populations that are naturally separated by higher-flow rivers and environmental barriers. They also appear to have naturally very low genetic diversity: ordinarily, we’d expect that they wouldn’t be great at adapting and surviving over a long time. Yet they’ve been here for a long time: so how do they do it? That’s the origin of many of the research questions for pygmy perches.

Adaptive evolution despite low genetic variation

One of the fundamental aspects of the genetic basis of evolution is the connection between genetic diversity and ‘adaptability’: we expect that populations or species with more genetic diversity are much more likely to be able to evolve and adapt to new selective pressures than those without it. Pygmy perches clearly contradict this at least a little bit, and so much of the research in the lab is about understanding exactly what factors and mechanisms allow pygmy perches to apparently adapt to, and survive in, what is traditionally not considered a very forgiving place to live. Recent research suggests that differential gene expression may be an important mechanism of adaptation for pygmy perch.

Recommended readings: Brauer et al. (2016); Brauer et al. (2017).

The influence of the historic environment on evolution

From an evolutionary standpoint, pygmy perches are unique in more ways than just their genetic diversity. They’re relatively ancient, with the origin of the group estimated at around 40 million years ago. Since then, they’ve diversified into a number of different species and spread all over the southern half of the Australian continent, demonstrating multiple movements across Australia in that time. This pattern is unusual for freshwater organisms, and this, combined with their ancient origins, makes them ideal candidates for studying the influence of historic environment, climate and geology on the evolution and speciation of freshwater animals in Australia. And that’s the focus of my PhD (although not exclusively; plenty of other projects have explored questions in this area).

[Image: Bass Strait timelapse]
The changing sea levels across the Bass Strait from A) 25 thousand years ago, B) 17.5 thousand years ago, and C) 14 thousand years ago (similar to today), from Lambeck and Chappell (2001). This is an example of one kind of environmental change that would likely have influenced the evolutionary patterns of pygmy perch, separating the populations of northern Tasmania and Victoria.

Recommended readings: Unmack et al. (2013); Unmack et al. (2011).

Conservation management and ecological role

Of course, it’s all well and good to study the natural, evolutionary history of an organism as if it hasn’t had any other influences. But we all know how dramatic an impact humans have on the environment, and unfortunately for many pygmy perch species this means that they are threatened or endangered and at risk of extinction. Their biggest threats are introduced predators (such as the redfin perch and European carp), alteration of waterways (predominantly for agriculture) and, of course, climate change. For some populations, local extinction has already happened: some populations of the Yarra pygmy perch (N. obscura) are now completely gone from the wild. Many of these declines occurred during the Millennium Drought, when the aforementioned factors were exacerbated by extremely low water availability and consistently high temperatures. So naturally, a significant proportion of the work on pygmy perches is focused on their conservation, and on trying to boost and recover declining populations.

This includes the formation of genetics-based breeding programs for two species, the southern pygmy perch and the Yarra pygmy perch. A number of different organisations are involved in this ongoing process, including a couple of schools! These programs are informed by our other studies of pygmy perch evolution and adaptive potential, and hopefully, combined, these efforts can save the species from extinction.

[Image: Yarra-breeders-vid.gif]
Some of the Yarra pygmy perch from the Murray-Darling Basin population (now extinct in the wild), ready to make breeding groups!
[Image: Fin clipping Yarras.jpg]
Me, fin clipping the Yarra pygmy perch in the breeding groups for later genetic analyses. Yes, I know, I needed a haircut.

Recommended readings: Brauer et al. (2013); Attard et al. (2016); Hammer et al. (2013).

Hopefully, some of this convinces you that pygmy perch are actually rather interesting creatures (I certainly think so!). Pygmy perch research can offer a unique insight into evolutionary history, historical biogeography, and conservation management. Also, they’re kinda cute….so that’s gotta count for something, right? If you wanted to find out more about pygmy perch research, and get updates on our findings, be sure to check out the Molecular Ecology Lab Facebook page or our website!

Bigger and better: the evolution of genomic markers

From genetic to genomic markers

As we discussed in last week’s post, different parts of the DNA can be used as genetic markers for analyses relating to conservation, ecology and evolution. We looked at a few different types of markers (allozymes, microsatellites, mitochondrial DNA) and why different markers are good for different things. This week, we’ll focus on the much grander and more modern state of genomics; that is, using DNA marker datasets that often span thousands of genes!

[Image: Genomics vs genetics]
If we pretended that the size of the text for each marker was indicative of how big the data is, this figure would probably be about a 1000x under-estimation of genomic datasets. There is not enough room on the blog page to actually capture this.

I briefly mentioned last week that the development of genomics was largely facilitated by what we call ‘next-generation sequencing’, which allows us to easily obtain billions of fragments of DNA and collate them into a useful dataset. Most genomic technologies differ based on how they fragment the DNA for sequencing and how the data is processed.

While the analytical, monetary and time costs of obtaining genomic data have decreased as sequencing technology has improved, we still need to balance these factors together when deciding which method to use. Many methods allow us to put many individual samples together in the same reaction (we tell which sequence belongs to which sample using special ‘barcode sequences’ that are unique to one specific sample): in this case, we also need to consider how many samples to place together (“multiplex”).
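To make the barcode idea concrete, here’s a minimal sketch of how demultiplexing might work: each read is sorted back into its sample using the barcode at the start of the read. The barcodes, sample names and reads below are all invented for illustration, and real demultiplexers also handle sequencing errors within the barcode itself.

```python
# A toy demultiplexer: sort each read into its sample using the barcode at
# the start of the read. Barcodes, sample names and reads are all invented.
barcodes = {
    "ACGT": "sample_1",
    "TGCA": "sample_2",
    "GATC": "sample_3",
}

def demultiplex(reads, barcode_length=4):
    """Assign reads to samples by their leading barcode, trimming it off."""
    by_sample = {name: [] for name in barcodes.values()}
    unassigned = []  # reads whose barcode we don't recognise
    for read in reads:
        sample = barcodes.get(read[:barcode_length])
        if sample is None:
            unassigned.append(read)
        else:
            by_sample[sample].append(read[barcode_length:])
    return by_sample, unassigned

reads = ["ACGTTTGGACCA", "TGCAGGATTACA", "GATCCGTAGGTA", "NNNNACGTACGT"]
by_sample, unassigned = demultiplex(reads)
print(by_sample)   # {'sample_1': ['TTGGACCA'], 'sample_2': ['GGATTACA'], ...}
print(unassigned)  # ['NNNNACGTACGT'] — likely a sequencing error
```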

As a broad generalisation, we can separate most genomic sequencing methods into two broad categories: whole genome or reduced-representation. As the name suggests, whole genome sequencing involves collecting the entire genome of the individuals we use, although this is generally very expensive and can only be done with a limited number of samples at a time. If we want to have a much larger dataset, often we’ll use reduced-representation methods: these involve breaking down the whole genome into much smaller fragments and sequencing as many of these as we can to get a broad overview of the genome. Reduced-representation methods are much cheaper and are appropriate for larger sample sizes than whole genome sequencing, but naturally lose large amounts of information from the genome.

[Image: Genomic sequencing pathway]
The (very, very) vague outline of genomic sequencing. First we take all of the DNA of an organism, breaking it into smaller fragments, in this case using a restriction enzyme (see below). We then amplify these fragments, making billions of copies of them, before piecing them back together to either assemble the entire genome of a few individuals (left) or patches of the genome for more individuals (right).

Restriction-site associated DNA (RADseq)

Within the Molecular Ecology Lab, we predominantly use a technology known as “double digest restriction site-associated DNA sequencing”, which is a huge mouthful, so we just call it ‘ddRAD’. This sounds incredibly complicated, but (as far as sequencing methods go, anyway) is actually relatively simple. We take the genome of a sample and, using particular enzymes (called ‘restriction enzymes’) that cut DNA wherever a specific recognition sequence occurs, we break the genome down into small fragments (usually up to 200 bases long, after we filter it). We then attach a specific barcode for that individual, and a few more bits and pieces as part of the sequencing process, and then pool the samples together. This pool (a “library”) is sent off to a facility to be run through a sequencing machine and produce the data we work with. The ‘dd’ part of ‘ddRAD’ just means that a pair of restriction enzymes are used in this method, instead of just one (it’s a lot cleaner and more efficient).

[Image: ddRAD flowchart]
A simplified standard ddRAD protocol. 1) We obtain the DNA-containing tissue of the organism we want to study, such as blood, skin or muscle samples. 2) We extract all of the genomic DNA from the tissue sample, making sure we have good quantity and quality (avoiding degradation if possible). 3) We break the genome down into smaller fragments using restriction enzymes, which cut at certain places (orange and green marks on the top line). We then attach special sequences to these fragments, such as the adapter (needed for the sequencer to work) and the barcode for that specific individual organism (the green bar). 4) We amplify the fragments, generating billions of copies of each of them. 5) We send these off to a sequencing facility to read the DNA sequence of these fragments (often outsourced to a private institution). 6) We get back a massive file containing all of the different sequences for all of the organisms in one file. 7) We separate out these sequences into the individuals they came from by using their special barcodes as identifiers (the coloured codes). 8) We then process this data to make sure it’s of the best quality possible, including removing sequences that we don’t have enough copies of or that contain errors. From this, we produce a final dataset, often with one continuous sequence for each individual. If this dataset doesn’t meet our standards for quality or quantity, we go back and try new filtering parameters.
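As a toy illustration of the digestion and size-selection ideas above, here’s a sketch that cuts a made-up sequence wherever the recognition sites of two real enzymes (EcoRI and MspI) occur, then keeps only fragments in a target size window. The sequence and size thresholds are invented, and the actual cut-site chemistry (sticky ends, offsets) is ignored entirely.

```python
import re

# Recognition sequences of two real restriction enzymes; the cut-site
# chemistry is ignored in this conceptual sketch.
ECORI = "GAATTC"  # a 6-base "rare" cutter
MSPI = "CCGG"     # a 4-base "frequent" cutter

def double_digest(genome, site_a=ECORI, site_b=MSPI):
    """Split a sequence immediately before every occurrence of either site.

    Uses a zero-width lookahead split (requires Python 3.7+).
    """
    return re.split(f"(?={site_a}|{site_b})", genome)

def size_select(fragments, min_len=40, max_len=90):
    """Keep only fragments within a target size window, as a real protocol does."""
    return [f for f in fragments if min_len <= len(f) <= max_len]

# An invented 'genome' with one EcoRI site and one MspI site:
genome = ("ATGC" * 25) + ECORI + ("TTGA" * 20) + MSPI + ("ATTA" * 10)
fragments = double_digest(genome)
print([len(f) for f in fragments])  # [100, 86, 44]
print(len(size_select(fragments)), "fragments kept after size selection")  # 2
```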

Gene expression and transcriptomics

Sometimes, however, we might not even want to look at the exact DNA sequence. You might remember in an earlier blog post that I mentioned genes can be ‘switched on’ or ‘switched off’ by activator or repressor proteins. Well, because of this, we can have the exact same genes act in different ways depending on the environment. This is most observable in tissue development: although all of the cells of all of your organs have the exact same genome, the control of gene expression changes what genes are active and thus the physiology of the organ. We might also have genes which are only active in an organism under certain conditions, like heat shock proteins under hot conditions.

This can be an important part of evolution, as being able to easily change gene expression may allow an individual to adapt to new environmental pressures much more easily; we call this ‘phenotypic plasticity’. In this case, we might want to look at which genes are expressed, or how much they are expressed, in different conditions or populations: this is called ‘comparative transcriptomics’. To do this, instead of sequencing the DNA, we sequence the RNA of an organism (the middle step of making proteins, so most RNAs are only present if the gene is being expressed).
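At its simplest, a comparative transcriptomics analysis boils down to asking “how much more (or less) is this gene expressed in one condition than another?”. Here’s a minimal sketch using invented, already-normalised read counts (real pipelines must first correct for library size and use proper statistics); note how the heat shock protein gene jumps under hot conditions, echoing the example above.

```python
import math

# Invented, already-normalised read counts per gene in two conditions.
control = {"hsp70": 50, "opsin": 200, "actin": 1000}
heat_shock = {"hsp70": 900, "opsin": 180, "actin": 1050}

def log2_fold_change(before, after, pseudocount=1):
    """log2 expression ratio; the pseudocount avoids dividing by zero."""
    return math.log2((after + pseudocount) / (before + pseudocount))

for gene in control:
    lfc = log2_fold_change(control[gene], heat_shock[gene])
    status = "up" if lfc > 1 else ("down" if lfc < -1 else "stable")
    print(f"{gene}: log2 fold change = {lfc:+.2f} ({status} under heat)")
# hsp70 comes out strongly up-regulated (~+4.1); the others barely move.
```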

Processing data

Despite how it must appear, most of the work with genomic datasets actually comes after you get the sequences back. Because of the nature and scale of genomic datasets, rigorous analytical pipelines are needed to manage and filter the billions of small sequences into full sequences of high quality. There are many different ways to do this, and they usually involve playing with parameters, so I won’t delve into the details (although some of it is explained in the boxed part of the flowchart figure).
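To give a flavour of what ‘playing with parameters’ looks like, here’s a toy version of one common filtering step: keeping a locus only if enough individuals have it sequenced at sufficient depth. The data structure, names and thresholds are all invented for illustration.

```python
# A toy locus filter of the kind a RADseq pipeline might apply: keep a locus
# only if enough individuals have it sequenced at sufficient depth.
loci = {
    "locus_1": {"ind_A": 25, "ind_B": 30, "ind_C": 3},   # read depth per individual
    "locus_2": {"ind_A": 40, "ind_B": 38, "ind_C": 42},
    "locus_3": {"ind_A": 5, "ind_B": 0, "ind_C": 2},
}

MIN_DEPTH = 10        # reads needed to trust a genotype call
MIN_INDIVIDUALS = 2   # individuals that must pass for the locus to be kept

def filter_loci(loci):
    """Drop low-depth genotypes, then drop loci shared by too few individuals."""
    kept = {}
    for name, depths in loci.items():
        passing = {ind: d for ind, d in depths.items() if d >= MIN_DEPTH}
        if len(passing) >= MIN_INDIVIDUALS:
            kept[name] = passing
    return kept

print(filter_loci(loci))
# locus_1 and locus_2 survive; locus_3 is dropped entirely.
```

Tightening or loosening MIN_DEPTH and MIN_INDIVIDUALS is exactly the kind of parameter-juggling that determines how much (and how reliable) data comes out the other end.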

The future of genomics

No doubt as the technology improves, whole genome sequencing will become progressively more feasible for more species, opening up the doors for a new avalanche of data and possibilities. In any case, we’ve come a long way since the first whole genome (for Haemophilus influenzae) in 1995 and the construction of the whole human genome in 2003.

 

Using the ‘blueprint of life’: an introduction to DNA markers

What is a ‘molecular marker’?

As we’ve previously discussed within The G-CAT, information from the DNA of organisms can be used in a variety of ways to study evolution and ecology, inform conservation management, and understand the diversity of life on Earth. We’ve also had a look at the general background of the DNA itself, and some of the different parts of the genome. What we haven’t discussed yet is how we use the DNA sequence in these studies; most importantly, which part of the genome to use.

The genome of most organisms is massive. The size of the genome varies depending on the organism, with one of the smallest recorded genomes belonging to a bacterium (Carsonella ruddii), consisting of 160,000 bases. There is a bit of debate about the largest recorded genome, but one contender (the ‘canopy plant’, Paris japonica) has a genome stretching 150 billion base pairs long! The human genome sits in the middle at around 3 billion bases long. Naturally, it would be incredibly difficult to obtain the sequence of the whole genome of many organisms (particularly 20-30 years ago, due to technological limitations in the sequencing process), so we usually pick a specific region of the genome instead. The exact region (or type of region) we use is referred to as a ‘molecular marker’.

How do we choose a good marker?

The marker we pick is incredibly important: this is often based on how much variation we need to observe across groups. For example, if we want to study differences between individuals, say in a pedigree analysis, we need to pick a section of the DNA that will show differences between individuals; it will need to mutate fairly rapidly to be useful. If it mutates too slowly, all individuals will look identical genetically and we won’t have learnt anything new at all.

On the flipside, if we want to study evolution at a larger scale (say, between species, or groups of species) we would need to use a marker that evolves much more slowly. Using a rapidly mutating section of DNA would effectively give a tonne of ‘white noise’; it’d be impossible to separate genetic differences at the species level (i.e. one species is different to another at that base) from those at the individual level (i.e. one or many individuals within the species are different). Thus, we tend to use much slower mutating markers for deeper evolutionary history.

[Image: Evol spectrum]
The spectrum of evolutionary history, with evolutionary splits between major animal groups on the left, splits between species in the middle, and splits between individuals within a family tree on the right. The effectiveness of a marker for a particular part of the spectrum depends on its mutation rate. The original figure was taken from a landmark paper by Avise (1994), who is considered one of the forefathers of molecular ecology.

Think of it like comparing cats and dogs. If we wanted to compare different cats to one another (say, different breeds) we could use hair length or coat colour as a useful trait. Since some breeds have different coat characteristics, and these don’t vary as much within a breed as across breeds, we can easily tell a long-haired cat from a short-haired cat. However, if we tried to use coat colour and length to compare cats and dogs we’d be stumped, because both species have lots of variation in these traits within their species. Some cats have coat lengths more similar to some dogs than to other cats, for example; so they’re not good characteristics to separate the two animal species (we might use muzzle shape, or body shape, instead). If we substitute each of these traits with a particular marker, then we can see that some markers are better for some comparisons but not good for others.
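The ‘white noise’ problem can even be shown with a little simulation: evolve two lineages away from a common ancestor, once with a slowly mutating marker and once with a rapidly mutating one, and compare how similar the sequences remain. The sequence length, rates and number of time steps below are arbitrary choices for illustration.

```python
import random

random.seed(1)
BASES = "ACGT"

def mutate(seq, rate):
    """Substitute each base with probability `rate` per time step."""
    return "".join(random.choice(BASES) if random.random() < rate else base
                   for base in seq)

def identity(a, b):
    """Proportion of positions at which two sequences share the same base."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

ancestor = "".join(random.choice(BASES) for _ in range(1000))

# Two lineages evolve independently from the ancestor for 500 'time steps'.
for rate, label in [(0.0001, "slow marker"), (0.01, "fast marker")]:
    lineage_1 = lineage_2 = ancestor
    for _ in range(500):
        lineage_1, lineage_2 = mutate(lineage_1, rate), mutate(lineage_2, rate)
    print(f"{label}: {identity(lineage_1, lineage_2):.0%} identity after the split")

# The fast marker decays towards ~25% identity (pure chance for 4 bases),
# losing all deep history; the slow marker still carries usable signal.
```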

Allozymes

The most traditional molecular markers are referred to as ‘allozymes’; instead of comparing actual genetic sequences (something that was not readily possible early in the field), variations in the protein itself (i.e. the amino acids of the protein, not the DNA code underlying it) were compared between species. Changes in proteins occur very rarely, as natural selection tends to push against randomly changing protein structure, since a protein’s shape is critical to its function. Because of this, allozymes were only really effective for studying very broad comparisons (mainly across species or species groups); the exact protein used depends on the study organism. Allozymes are generally considered outdated in the field nowadays.

With the development of technologies that allowed us to actually determine the DNA code of genes, molecular ecology moved into comparing actual sequences across individuals. However, early sequencing technology could generally only accurately determine small sections of DNA at a time, so particular markers capitalising on this were developed. Many of these are still used due to their cost-effectiveness and general ease of analysis.

Microsatellites

For comparing closely related individuals (within a pedigree, or a population), markers called ‘microsatellites’ are widely used. These are small sections of the genome which have repetitive DNA codes; usually, the same two or three base pairs (one ‘motif’) are repeated a number of times in a row (the ‘repeat number’). While the motifs themselves rarely get mutations, the number of repeated motifs mutates very rapidly. This is because the enzyme that copies DNA isn’t perfect, and often ‘slips up’, adding or cutting off a repeat from the microsatellite sequence. Thus, differences in the repeat number of microsatellites accumulate pretty quickly, to the point where you can determine the parents of an individual with them.

[Image: Microsat_diagram]
The general (and simplified) structure of a microsatellite marker. 

Microsatellites are often used in comparisons across closely related individuals, such as within pedigrees or within populations. While they are relatively easy to obtain, one drawback is that you need to have some understanding of the exact microsatellite you wish to analyse before you start; you need to make a specific ‘primer’ sequence to be able to get the right marker, as some may not be informative in particular species or comparisons. Many researchers choose to use 10-20 different microsatellite markers together in these types of studies, such as in human parentage analyses.

[Image: Cats_parentage]
Microsatellites are useful for parentage analysis. Our previous guest contestants are here to discuss ‘Who is the father?!’ in Maury-like fashion. The results are in, and using 4 microsatellites (1-4) and looking at the number of repeats in each of those, we can see that contestant 2 is undoubtedly the father! I’ll be honest, I have no idea if this is how Maury works, but I think it would work.
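The logic behind that verdict fits in a few lines of code: at every microsatellite, an offspring carries one allele (repeat number) from each parent, so any candidate father who can’t supply the non-maternal allele at some locus is excluded. All the repeat numbers below are invented, and real parentage analyses also account for mutation and genotyping error.

```python
# Toy parentage exclusion using microsatellite repeat numbers. Each individual
# carries two alleles per locus (one from each parent); all values invented.
offspring = {"msat1": (8, 12), "msat2": (5, 5), "msat3": (10, 14), "msat4": (7, 9)}
mother = {"msat1": (8, 10), "msat2": (5, 6), "msat3": (14, 14), "msat4": (9, 11)}

candidates = {
    "contestant_1": {"msat1": (11, 13), "msat2": (4, 6), "msat3": (9, 11), "msat4": (6, 8)},
    "contestant_2": {"msat1": (12, 12), "msat2": (5, 7), "msat3": (10, 13), "msat4": (7, 8)},
}

def could_be_father(offspring, mother, father):
    """At every locus, one offspring allele must match each parent."""
    for locus, (a1, a2) in offspring.items():
        ok = False
        for from_mum in (a1, a2):           # try both ways of splitting the pair
            from_dad = a2 if from_mum == a1 else a1
            if from_mum in mother[locus] and from_dad in father[locus]:
                ok = True
        if not ok:
            return False  # excluded: no way to explain this locus
    return True

for name, genotype in candidates.items():
    verdict = "possible father" if could_be_father(offspring, mother, genotype) else "excluded"
    print(f"{name}: {verdict}")
# contestant_1 is excluded at msat1; contestant_2 fits at every locus.
```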

Mitochondrial DNA

For deeper comparisons, however, microsatellites mutate far too rapidly to be effective. Instead, we can choose to use the DNA of the mitochondria. You may remember the mitochondria as ‘the powerhouse of the cell’; while this is true, it also has a lot of other unique properties. The mitochondrion was actually (a very, very, very long time ago) a separate, bacteria-like organism which became symbiotically embedded within another cell. Because of this, and despite a couple of billion years of evolution since that time, the mitochondria actually has its own genome, separate to the ‘host’ genome (like the standard human genome). The full mitochondrial genome consists of around 37 different genes, and much of the variation within them is effectively neutral; as such, natural selection doesn’t affect them as much as many other genes. The most commonly used mitochondrial genes are the cytochrome b gene (cytb for short) and the cytochrome c oxidase subunit 1 (CO1) gene.

The mitochondrial genome evolves relatively rapidly (though not nearly as fast as microsatellites) and is found in pretty much every plant and animal on the planet. Because of these traits, it’s often used as a way of diagnosing species through the ‘Barcode of Life’ project (using cytb and CO1). It’s very widely used within species-level studies, to the point where we can even use the relatively consistent mutation rate of the mitochondrial genome to estimate how long ago different species separated in evolution.
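This ‘molecular clock’ logic reduces to a back-of-the-envelope calculation: divide the proportion of bases that differ between two species by an assumed divergence rate. The sketch below uses the commonly cited (and much-debated) figure of roughly 2% pairwise mtDNA divergence per million years in animals; the sequences themselves are invented, and real studies calibrate rates per lineage using fossils or biogeography.

```python
# A back-of-the-envelope molecular clock, assuming ~2% pairwise sequence
# divergence per million years — a commonly cited rate for animal mtDNA,
# though it varies a lot among taxa and genes.

def count_differences(seq_a, seq_b):
    """Number of positions at which two aligned sequences differ."""
    return sum(a != b for a, b in zip(seq_a, seq_b))

def divergence_time_mya(seq_a, seq_b, rate_per_my=0.02):
    """Estimate time since two lineages split, in millions of years."""
    p_distance = count_differences(seq_a, seq_b) / len(seq_a)
    return p_distance / rate_per_my

# Two invented 50-base 'cytb' fragments differing at 4 sites (8% divergence):
species_1 = "ATGACCAACATTCGAAAATCACACCCACTACTAAAAATTATCAACAACTC"
species_2 = "ATGACCAACATTCGAAAATCTCACCCTCTACTAAAAGTTATCAACGACTC"

print(f"~{divergence_time_mya(species_1, species_2):.1f} million years since divergence")
# 0.08 / 0.02 → ~4.0 million years
```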

[Image: Cats_barcode]
Not entirely how the Barcode of Life works, but close enough, right?

Other markers?

There are plenty of other genetic markers that are used within molecular ecology, with some focusing on only the exons or introns of genes, or other repetitive sequences. However, microsatellites and mitochondrial genes are among the most widely used in evolution and conservation studies.

While these markers have been very useful in building the foundations of molecular ecology as a scientific field, developments in sequencing technology, analytical methods and evolutionary theory have pushed our ability to use DNA to understand evolution and conservation even further: particularly the development of sequencing machines which can process much larger amounts of DNA. This has pushed genetics into the age of ‘genomics’: while this sounds like a massively technical difference, it’s really just about the difference in the size of the data we can use. Obviously, this has many other benefits for the kinds of questions we can ask about evolution, conservation and ecology.

Genomics has massively expanded in recent years, and the types, quantity and quality of data available are just as diverse. Stay tuned, because next week we’ll start to delve into the modern world of genomics!

Playing around with science

Science in pop culture

For most people, scientific research can seem somewhat distant and detached from the average person (and society generally). However, the distillation of scientific ideas into various forms of media has been done for ages, and is particularly prevalent within (although not limited to) science fiction. It’s not all that uncommon for scientists to describe the origin of their scientific interest as coming from classic sci-fi movies, tv shows, or games. I’m not saying dinosaurs haven’t always been cool, but after seeing them animated and ferocious in Jurassic Park, I have no doubt a new generation of palaeontologists were inspired to enter the field. I’m sure the same must also be at least partially true for archaeology and Indiana Jones. While I can guarantee the actual scientific research is nowhere near the adventurous, high-octane thriller those movies would depict, their respective popularities renew interest in the science and inspire new students of the disciplines.

[Image: Velociraptor]
Sure, they’re not perfectly scientifically accurate, but they certainly get the attention of the public. Source: Jurassic Park wiki.

The inclusion of science within pop culture media such as movies, tv shows, music and video games can have profound impacts on the overall perception of science. This influence seems to go either way depending on how the science is presented and perceived: positive portrayals can succinctly present scientific matter in a way that is easy to interpret, and thus can generate interest in the fields of science. Contrastingly, negative portrayals, or misinterpretations of science, can drastically impact what people understand about scientific theory. For example, the film Lucy was built on the horrendously outdated belief that the average human only uses 10% of their brain capacity: by achieving 100% brain capacity using a stimulant, the titular character becomes miraculously superhuman. While this concept is clearly outrageously behind the times for anyone who follows the psychological sciences, a disturbing number of people apparently still believe it. Thus, misrepresentation of scientific theory perpetuates outdated concepts.

[Image: 10% brain comic]
I mean, someone may as well, right?

Don’t get me wrong: I love ridiculous science fiction as much as the next nerd, and I’m certainly not of the expectation that all science-based information needs to be 100% accurate, without fail (after all, the fiction and fantasy has to fit somewhere…). But it’s important to make sure the transition from scientific research to popular media doesn’t lose the important facts along the way.

Evolution’s relationship with pop culture has been a little more complicated than other scientific theories. Sometimes it’s invoked rather loosely to explain supernatural alien monsters (e.g. Xenomorphs; Alien franchise); other times it’s flipped on its head to show a type of de-evolution (Planet of the Apes). Science fiction has long recognised the innovative and seemingly endless possibilities of evolution and the formation of new species. Generally, the audience is fairly familiar with the concept of evolution (at least in principle) and it makes for a useful tool for explaining the myriad of life in science fiction stories.

Evolution in video games?

It probably doesn’t come as a huge surprise to note that I’m a nerd in all aspects of my life, not just my career. For me, this is particularly a love of video games. Rarely, however, do these two forms of nerdism coincide for me: while some games apply science and scientific theory, they are usually biased towards physics and engineering disciplines (looking at you, Portal). As far as my field is concerned, there are a few notable examples (such as Spore) which encapsulate the essence and majesty of evolution, but rarely do they incorporate the ‘genetic’ aspect that I love.

[Image: Spore screenshot]
There’s nothing quite like making a horrific carnivorous monster and collapsing ecosystems by exterminating all of the wildlife, then taking over the Universe. Hmm…

You can then imagine my utter delight at the discovery of a game that actually incorporates both population genetics and interesting gameplay. The indie survival game, aptly named Niche: A Genetics Survival Game, very literally represents this ‘niche’ for me (and I will not apologise for the pun!). Combining simplified models of population genetic processes such as genetic diversity, inbreeding (and associated inbreeding depression), natural selection, and stochastic events, Niche beautifully incorporates scientific theory (albeit toned down to a layman level) with challenging, yet engaging, gameplay mechanics and an adorable art style.

[Image: Niche screenshot]
Niche: A Genetics Survival Game epitomises the intersection of evolutionary theory and pop culture.

As one might expect from the title, Niche is at heart a survival game: the aim is to have your very own population of animals (dubbed ‘Nichelings’) survive the stresses of the world, by balancing population size, gene pools and resources (such as food, nests and space) and by fighting off predators. Over time, the genetics component drives the evolution of your Nichelings, pushing them to be better at certain tasks depending on the traits selected for: the ultimate aim of the game is to create the perfectly adapted species that can colonise all of the randomly generated land masses.

[Image: Niche screenshot DNA]
The user interface of Niche. A: The ranking of the selected Nicheling, moving from alpha, to beta, to gamma. This determines the order the Nichelings eat in (gammas get the short end of the stick). B: The traits of the selected Nicheling. In order, these are the physical traits (i.e. the strength, speed and abilities of the animal), the genetic sequence (genotype) of the animal (expanded in C), the user-chosen mutations for that Nicheling, and the pedigree of Nichelings. C: The expanded DNA sequence of the selected Nicheling, showing the paternal and maternal variants (alleles) of all the possible genes. Highlighted traits are the expressed (dominant) ones, whilst the faded ones indicate recessive carrier genes that aren’t expressed. D: Collected food, one of the most important resources in the game. E: Nest material, required to build nests and produce offspring. F: The different senses (sight, smell, hearing), which can be toggled to give different viewpoints of the surrounding environment (with different benefits and weaknesses).
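That paternal/maternal allele system (panel C) is just Mendelian dominance, and a toy version of it fits in a few lines. The gene names, alleles and dominance values below are invented for illustration, and are certainly not the game’s actual internals.

```python
# A toy model of a Niche-like genome: each gene has a maternal and a paternal
# allele, and a simple dominance order decides which one is expressed.
# Everything here is invented — not the game's actual implementation.

# Lower number = more dominant (an assumption for this sketch).
DOMINANCE = {"big_claws": 0, "normal_claws": 1, "runner_legs": 0, "normal_legs": 1}

nicheling = {
    "claws": ("normal_claws", "big_claws"),  # (maternal allele, paternal allele)
    "legs": ("runner_legs", "normal_legs"),
}

def expressed_phenotype(genome):
    """Pick the dominant allele at each gene; ties default to the maternal copy."""
    return {gene: min(alleles, key=lambda allele: DOMINANCE[allele])
            for gene, alleles in genome.items()}

print(expressed_phenotype(nicheling))
# {'claws': 'big_claws', 'legs': 'runner_legs'} — the recessive alleles are
# still carried (the 'faded' ones in panel C) and can resurface in offspring.
```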

Niche requires cunning strategy, good foresight and planning, and sometimes a little luck. Although I’m decidedly not very good at Niche yet (I think my rates of extinction would mirror the real world a little too much for my liking…), the chance to bring my scientific background into my favourite hobby is a somewhat magical experience.

[Image: Niche screenshot extinct]
Oh god, I hope this isn’t a premonition for my career!

You might wonder why I care so much about a video game. While the game is in and of itself an interesting concept, to me it exemplifies one way we can make science an enjoyable and digestible concept for non-scientists. It’s possible that Niche could open the door of population-level genetics and evolution to a new audience, and potentially inspire the next generation of scientists in the field. Although that might be an extraordinarily long shot, it is my hope that the curiosity, mystery and creativity of scientific research is at least partially represented in media such as gaming to help integrate science and society.

Using video games for science?!

Both science and society can benefit from the (accurate) representation of science in pop culture, and not just through fostering a connection between scientific theory and people’s recreational hobbies. On rare occasions, pop culture can even be used as a surrogate medium for testing scientific theories and hypotheses in a specific environment: for example, World of Warcraft has unwittingly contributed to scientific progress. As part of a particular boss battle, characters could become infected with a particular disease (called “Corrupted Blood”), which would have significant effects on players, but only for a few seconds. While this was supposed to be removed after leaving the area of the fight, a bug in the game caused it to stay on afflicted animal pets, and it thus became a viral phenomenon when it started to spread into the wider world (of Warcraft). The epidemic wiped out swathes of lower-level players and caused significant social repercussions in the World of Warcraft community as players adjusted their behaviour to avoid or prevent transmission of the deadly disease.

This unique circumstance allowed a group of scientists to use it as a simulation of a real viral outbreak, as the spread of the disease was directly related to the social behaviour and interactivity of players within the game. The “Corrupted Blood” incident so enthralled scientists that multiple papers were published discussing the feasibility of using virtual gaming worlds to simulate human reactions to epidemic outbreaks and viral transmissions on an unparalleled scale. Similarities were drawn between the method of transmission and the behavioural responses seen in real-world events such as the avian flu epidemic.

[Image: Corrupted blood event]
And you thought Bird Flu was bad, at least they couldn’t teleport! Source: GameRant.
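The kind of model epidemiologists fit to outbreaks like this can be surprisingly simple. Below is a minimal SIR (Susceptible-Infected-Recovered) simulation; the population size, transmission and recovery rates are invented, not fitted to Corrupted Blood or any real epidemic.

```python
# A minimal SIR epidemic simulation of the kind used to model outbreaks.
# All parameters are invented for illustration.

def simulate_sir(population=10_000, initial_infected=10,
                 beta=0.4, gamma=0.1, days=100):
    """beta: transmission rate; gamma: recovery rate (1 / infectious period)."""
    s, i, r = population - initial_infected, initial_infected, 0
    history = []
    for day in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((day, s, i, r))
    return history

for day, s, i, r in simulate_sir()[::20]:
    print(f"day {day:3d}: susceptible={s:7.0f} infected={i:7.0f} recovered={r:7.0f}")
# With beta/gamma = 4, the infection tears through the population and burns
# out — much like Corrupted Blood did, minus the teleporting.
```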

This isn’t even the only example of World of Warcraft informing research, with others using it to model economic theories through its free market auction system. While these examples may seem extraordinarily strange (to scientists and non-scientists alike), they demonstrate how popular media such as gaming can be an important interactive interface between science and society.