Changing the (water)course of history

The structure of a river system

Anyone who has had to study geography at some point in their education will likely be familiar with the idea of river courses drawn on a map. They’re so important, in fact, that they often form the boundaries of countries, states and other political units. Water is a fundamental requirement of all forms of life, and the riverways that scatter the globe underpin the maintenance, structure and accumulation of a large swathe of biodiversity.

So, what is a river?


Conservation pets: connecting with nature

An Ode to Jessie

Earlier in the year, I made a comment that, as part of the natural evolution of this blog, I would try to change up the writing format every now and then to something a little more personal, emotional and perhaps tangential to science. I must confess that this is one of those weeks, as it’s been an emotional rollercoaster for me. So, sorry in advance for the self-oriented, reflective nature of this piece.


UnConservation Genetics: tools for managing invasive species

Conservation genetics

Naturally, all species play their role in the balance and functioning of ecosystems across the globe (even the ones we might not like all that much, personally). The persistence or extinction of ecologically important species is a critical component of the overall health and stability of an ecosystem, and thus our aim as conservation scientists is to use whatever tools we have at our disposal to conserve species. One of the most central themes in conservation ecology (and on The G-CAT, of course) is the notion that genetic information can be used to better our conservation management approaches. This usually involves understanding the genetic history and identity of our target threatened species, from which we can best plan for their future. This can take the form of genetically-informed relatedness estimates for breeding programs; identifying important populations and those at risk of local extinction; or identifying evolutionarily-important new species which might hold unique adaptations that could allow them to persist in an ever-changing future.

Applications of conservation genetics
Just a few applications of genetic information in conservation management, such as in breeding programs and pedigrees (left), identifying new/cryptic species (centre) and identifying and maintaining populations and their structure (right).

The Invaders

Contrastingly, sometimes we might also use genetic information to do the exact opposite. While so many species on Earth are at risk of extinction (or have already passed over the precipice), some have gone rogue with our intervention. These are, of course, invasive species: pests that have been introduced into new environments and, by their prolific nature, start to throw the ecosystem out of balance. Australians will be familiar with no shortage of invasive species, the most notable of which is the cane toad, Rhinella marina. However, there is a plethora of invasive species, ranging from the notably prolific (such as the cane toad) to the seemingly mundane (such as the blackbird): so how can we possibly deal with the sheer number and prolificacy of pests?

Table of invasive species in Australia
A table of some of the most prolific mammalian invasive species in Australia, including when they were first introduced and why, and their (relatively) recently estimated population sizes. Source: Wikipedia (and studies referenced therein). Some estimated numbers might not reflect current sizes as they were obtained from studies over the last 10 years.

Tools for invasive species management

There are a number of tools at our disposal for dealing with invasive species. These range from chemical controls (like pesticides), to biological controls and more recently to targeted genetic methods. Let’s take a quick foray into some of these different methods and their applications to pest control.

Types of control tools for invasive species
Some of the broad categories of invasive species control. For any given pest species, such as the cane toad (top), we might choose to use a particular set of methods to reduce their numbers. These can include biological controls (such as the ladybird, for aphid populations (left)); chemical controls such as pesticides; or even genetic engineering technologies.

Biological controls

One of the most traditional methods of pest control is the biological control. A biological control is, in simple terms, a species that can be introduced to an afflicted area to control the population of an invasive species. Usually, this is based on some form of natural co-evolution or hierarchy: species which naturally predate upon, infect or otherwise displace the pest in question are preferred. The basis of this choice is that nature, through evolution by natural selection, often creates a near-perfect machine adapted for handling the exact problem.

Biological controls can have very mixed results. In some cases, they can be relatively effective, such as the introduction of the moth Cactoblastis cactorum into Australia to control the invasive prickly pear. The moth lays eggs exclusively within the tissue of the prickly pear, and the resultant caterpillars ravage the plant. No secondary diet items have been recorded for the caterpillars, suggesting the control method has been very selective and precise.

Moth biological control flow chart
The broad life cycle of the cactus moth and how it controls the invasive prickly pear in Australia. The ravenous caterpillar larvae of the moth are effective at decimating prickly pears, whilst the moth’s specificity to this host means there is limited impact on other plant species.

On the contrary, bad biological controls can lead to ecological disasters. As mentioned above, the introduction of the cane toad into Australia is widely regarded as the origin of one of the worst invasive pests in the nation’s history. Cane toads were initially brought over in the 1930s to predate on the (native) cane beetle, which was causing significant damage to sugar cane plantations in the tropical north. Not overly effective at dealing with the problem they were introduced for, the cane toads instead rapidly spread across the northern portion of the continent. Native species that attempt to predate on the cane toad often die from its defensive toxin, causing massive ecological damage to the system.

The potential secondary impacts of biological controls, and the degree of unpredictability in how they will respond to a new environment (and how native species will respond to their introduction), have led conservationists to develop newer, more specific techniques. Similarly, viral and bacterial controls have had limited success (although they are still often proposed in conservation management, such as the planned carp herpesvirus release).

Genetic controls?

It is clear that more targeted and narrow techniques are required to effectively control pest species. At a more micro level, individual genes could be used to manage species: pest control is not the first context in which genetic modification has been proposed to deal with problem organisms. Genetic methods have been employed for years in crop farming, through the engineering of genes to produce ‘natural’ pesticides or insecticides. In a similar vein, it has been proposed that genetic modification could be a useful tool for dealing with invasive pests and protecting their native victims.

Gene drives

One targeted, genetics-based method that has shown great promise is the gene drive. Following some of the theory behind genetic engineering, gene drives are targeted suites of genes (or alleles) which, by their own selfish nature, propagate through a population at a much higher rate than alternative genes. In conjunction with other DNA modification methods, which can create fatal or sterilising genetic variants, gene drives present the opportunity for the natural breeding of an invasive species to spread the detrimental modified gene.
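
To make “propagates at a much higher rate” concrete, here’s a minimal sketch in Python of how a drive allele might spread compared with ordinary Mendelian inheritance. All of the parameters (a 95% transmission rate in heterozygotes, random mating, no fitness cost, a population of 10,000) are hypothetical, purely for illustration:

```python
import numpy as np

def allele_trajectory(transmission, p0=0.01, pop_size=10_000,
                      generations=20, seed=42):
    """Frequency of an allele over time when heterozygotes transmit it
    with probability `transmission` (0.5 = ordinary Mendelian inheritance)."""
    rng = np.random.default_rng(seed)
    p = p0
    freqs = [p]
    for _ in range(generations):
        # Hardy-Weinberg genotype frequencies under random mating
        hom, het = p ** 2, 2 * p * (1 - p)
        # Homozygotes always pass the allele on; heterozygotes pass it
        # on with the (possibly biased) transmission probability
        p_gamete = hom + het * transmission
        # Wright-Fisher sampling of 2N gametes adds genetic drift
        p = rng.binomial(2 * pop_size, p_gamete) / (2 * pop_size)
        freqs.append(p)
    return freqs

print("Mendelian (t=0.50):", round(allele_trajectory(0.50)[-1], 3))
print("Gene drive (t=0.95):", round(allele_trajectory(0.95)[-1], 3))
```

Starting from a frequency of just 1%, the biased transmission pushes the drive allele towards fixation within roughly a dozen generations, while the Mendelian allele simply drifts around its starting frequency.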

Gene drive diagram
An example of how gene drives are being proposed to tackle malaria. In this figure, the pink mosquito at the top has been genetically engineered using CRISPR to possess two important genetic elements: a genetic variant which causes the mosquito to be unable to produce eggs or bite (the pink gene), and a linked selfish genetic element (the gene drive itself; the plus) which makes this detrimental allele spread more rapidly than by standard inheritance. Sources: Nature and The Australian Academy of Science.

Although a relatively new and untested technique, gene drive technology has already been proposed as a method to address some of the prolific invasive mammals of New Zealand. Naturally, there are a number of limitations of, and reservations about, the method; similar to biological controls, there is concern about secondary impacts on other species that interact with the invasive host. Hybridisation between invasive and native species would cause the gene drive to spread into native species, counteracting the very conservation efforts it was deployed for. For example, a gene drive could not reasonably be proposed to deal with feral wild dogs in Australia without massively impacting the ‘native’ dingo.

Genes for non-genetic methods

Genetic information, more broadly, can also be useful for pest species management without necessarily feeding directly into genetic engineering methods. The various population genetic methods that we’ve explored over a number of different posts can also inform management. For example, understanding how populations are structured, and the sizes and demographic histories of those populations, may help us to predict how they will respond in the future and to focus our efforts where they are most effective. By analysing their adaptive history and responses, we may start to unravel exactly what makes a species a good invader, and better predict the future susceptibility of an environment to invasion.

Table of genetic information applications
A comprehensive table of the different ways genetic information could be applied in broader invasive species management programs, from Rollins et al. (2006). This paper specifically relates to pest management within Western Australia but the concepts listed here apply broadly. Many of these concepts we have discussed previously in a conservation management context as well.

The better we understand invasive species and populations from a genetic perspective, the more informed our management efforts can be and the more likely we are to be able to adequately address the problem.

Managing invasive pest species

The impact of human settlement on new environments extends far beyond our direct influences. Particularly in the last few hundred years, human migration has been an effective conduit for the spread of ecologically-disastrous species which undermine the health and stability of ecosystems around the globe. As such, it is our responsibility to Earth to attempt to address our problems: new genetic techniques are but one growing avenue by which we might be able to remove these invasive pests.

The human race(s)? Perspectives from genetics

The genetic testing of race

In one form or another, you may have been (unfortunately) exposed to the notion of ‘testing for someone’s race using genetics.’ In one sense, this is part of the motivation and platform of ‘23andMe’, which maps the genetic variants across the human genome back to likely origin populations to determine the relative ancestry of a person. In a much darker sense, the connection between genetic identity and race is the basis of eugenics, which invokes the genetic “purity” of a population (a concept that is utter nonsense, for reference) as justification for some racist hierarchy. Typically, this is associated with Hitler’s Nazism, but more insidious versions of this association still exist in the world: for Australian readers, most notably when the far-right conservative minor party One Nation suggested that people claiming to be Indigenous should be subjected to genetic testing to verify their race.

DNA Ancestry map
A simplified overview of how DNA Ancestry methods work, by associating particular genetic variants within your genome to likely regions of origin. Note the geographic imprecision in the method on the map on the right, as well as the clear gaps. Source: Ancestry blog.

The biological concept of a ‘race’

Beyond the apparent ethical and moral objections to the invasive nature of demanding genetic testing for Indigenous peoples, a crucial question is one of feasibility: even if you decided to genetically test for race, is this possible? It might come as a surprise to non-geneticists that actually, from a genetic perspective, race is not a particularly stable concept.

The notion of races based on genetics has been a highly controversial topic throughout the development of genetic theory and research. Even recently, James Watson (of Watson & Crick, who were credited with the discovery of the structure of DNA) was stripped of several titles (including Chancellor Emeritus) following controversial (and scientifically invalid) comments on the nature of race, genetics and intelligence. Comfortingly, the vast majority of the scientific community opposed his viewpoints on the matter, and in fact it has long been held that a ‘genetic race’ is not a scientifically stable concept.

James Watson
James Watson himself. I bet Rosalind Franklin never said anything like this… Source: Wikipedia.

You might ask: why is that? There are perceivable differences in the various peoples of the world, surely some of those could be related to both a ‘race’ and a ‘genetic identity’, right? Well, the issue is primarily the lack of identifiable genetic variants that can be associated with a race. Decades of research into genetic variation across the global human population indicate that, due to the massive size of the human population and its levels of genetic variation, it is functionally impossible to pinpoint genetic variants that uniquely identify a ‘race’. Human genetic variation is such a beautiful spectrum of alleles that it becomes impossible to reliably determine where one part of the spectrum ends and another begins, or to identify a strict number of ‘races’ within the kaleidoscope of the human genome.

How does this relate to 23andMe?

How does this relate to your ‘23andMe’ results? Well, chances are that some genetic variants might be traced back to a particular region (somewhere in Europe, say). But naturally, there are significant limitations to this kind of inference; notably, we rarely have reliable reference samples from ancient history to draw upon. This, combined with the fact that humans have mixed among ourselves (and even with other species) for millennia, means that tracing back individual alleles is exceedingly difficult.

Genetic variation and non-identifiability of race figure
A diagram of exactly why identifying a genetic basis for race is impossible in humans. A) The ‘idealised’ version of race: people are easily classified by their genetic identity, with some variation within each classification (in this case, race) but still distinctiveness between them. B) The reality of human genetic variation, in which the sheer amount of variation makes it exceedingly difficult to draw any robust boundaries between groups of people. Source: Harvard University blog.

This is even more difficult for people who have fewer sequenced ancestors or relatives; without a reference for genetic variation, it can be even harder to trace their genetic ancestry. Such is the case for Indigenous Australians, for whom there is a distinct lack of available genetic data (especially compared to European-descended Australians).

The non-genetic components

The genetic non-identifiability of race is but one aspect that contradicts the rationale of genetic race testing. As we discussed in the previous post on The G-CAT, the connection between genotype and physicality is not always clear or linear. The role of the environment in the expression of genetic variation, as well as its general influence on aspects such as behaviour, philosophy and culture, means that much more than the genome contributes to a person’s identity. For any given person, how they express and identify themselves is often more strongly associated with their non-genetic traits, such as beliefs and culture.

Genetic vs cultural inheritance
A comparison of genetic vs. cultural inheritance, which demonstrates (as an example) how other factors (in this case, other people) influence the passing on of cultural traits. Remember that this is but one aspect of the factors that determine culture and identity, and equally (probably more) complex networks exist for other influences such as environment and development. Source: Creanza et al. (2017), PNAS.

These factors cannot reliably be tested under a genetic framework. While there may be some influence of genes on how a person’s psychology develops, it is unlikely that genes can predict the lifestyle, culture and complete identity of said person. For Indigenous Australians, this has been further confounded by the corruption and disruption of identity through the Stolen Generations. As a result, many Indigenous descendants may not appear (from a genetic point of view) to be ‘purely’ Indigenous, but their identity and culture as Indigenous people are no less valid. To suggest that their genetic ancestry determines their identity more strongly than anything else is not only naïve from a scientific perspective, but nothing short of a horrific simplification and degradation of those seeking to reclaim their identity and culture.

The non-identifiability of genetic race

The science of genetics overwhelmingly suggests that there is no fundamental genetic underpinning of ‘race’ that can be reliably used. Furthermore, the impact of non-genetic factors on the more important aspects of personal identity, such as culture, tradition and beliefs, demonstrates that attempting to delineate people into subcategories by genetic identity is unreliable. Instead, genetic research and biological history fully acknowledge and embrace the diversity of the global human population. As it stands, the phrase ‘human race’ might be the most biologically-sound classification of people: we are all the same.

Crossing the Wires: why ‘genetic hardwiring’ is not the whole story

The age-old folly of ‘nature vs. nurture’

It should come as no surprise to any reader of The G-CAT that I’m a firm opponent of the false dichotomy (and yes, I really do love that phrase) of “nature versus nurture.” Primarily, this is because the phrase gives the impression of some kind of counteracting balance between intrinsic (i.e. usually genetic) and extrinsic (i.e. usually environmental) factors and the roles they play in behaviour, ecology and evolution. While both are undoubtedly critical for adaptation by natural selection, posing this as a black-and-white split ignores the possibility of interactions between traits and environments.

We know readily that fitness, the measure by which adaptation or maladaptation can be quantified, is the product of both the adaptive value of a certain trait and the environmental conditions said trait occurs in. A trait that might confer strong fitness in one environment may be very, very unfit in another. A classic example is fur colour in mammals: in a snowy environment, a white coat provides camouflage for predators and prey alike; in a rainforest environment, it’s like wearing one of those fluoro-coloured safety vests construction workers wear.

Genetics and environment interactions figure
The real Circle of Life. Not only do genes and the environment interact with one another, but genes may interact with other genes and environments may be complex and multi-faceted.
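
As a toy illustration of why the dichotomy breaks down, consider fitness as a function of the trait-environment *combination*, following the fur colour example above. The relative fitness values here are entirely made up for illustration; the point is that neither factor alone predicts fitness:

```python
# Hypothetical relative fitness values: neither the coat colour nor the
# environment alone determines fitness; only their combination does
fitness = {
    ("white coat", "snowfield"): 1.0,   # camouflaged
    ("white coat", "rainforest"): 0.2,  # fluoro safety vest territory
    ("dark coat", "snowfield"): 0.3,    # conspicuous against snow
    ("dark coat", "rainforest"): 0.9,   # blends into dark foliage
}

for (trait, environment), w in sorted(fitness.items()):
    print(f"{trait} in {environment}: relative fitness = {w}")
```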

Genetically-encoded traits

In the “nature versus nurture” context, the ‘nature’ traits are often inherently assumed to be genetic. This is because genetic traits are intrinsic as a fundamental aspect of life, are inheritable (and thus can be passed on and undergo evolution by natural selection), and define the important physiological traits that provide (or prevent) adaptation. Of course, not all of the genome encodes phenotypic traits, and even fewer regions relate to diagnosable traits relevant for natural selection to act upon. In addition, there is a bit of an assumption that many physiological or behavioural traits are ‘hardwired’: that is, regardless of any influence of environment, the genes will always produce a certain phenotype.

Adaptation from genetic variation.jpg
A very simplified example of adaptation from genetic variation. In this example, we have two different alleles of a single gene (orange and blue). Natural selection favours the blue allele so over time it increases in frequency. The difference between these two alleles is at least one base pair of DNA sequence; this often arises by mutation processes.

Despite how important the underlying genes are for the formation of proteins and the definition of physiology, they are not omnipotent in that regard. In fact, many other factors can influence how genotypes relate to phenotypes: we’ve discussed a number of these in minor detail previously. One example is interaction across different genes: physiological traits may be encoded by the cumulative presence and nature of many loci (as in quantitative trait loci and polygenic adaptation). Alternatively, one gene may translate to multiple different physiological characters if it shows pleiotropy.

Differential expression

One indirect way genetic information can impact the phenotype of an organism is through something we’ve briefly discussed before, known as differential expression. This is based on the notion that different environmental pressures may affect the expression of a gene (that is, how strongly it is transcribed and translated into protein) in alternative ways. This is a fundamental underpinning of what we call phenotypic plasticity: the concept that, despite having the exact same (or very similar) genes and alleles, two clonal individuals can differ in various traits. This relates to the example of genetically-identical twins who are not necessarily physically identical; the differences could be due to environmental constraints on growth, behaviour or personality.

Brauer DE figure
An example of differential expression in wild populations of southern pygmy perch, courtesy of Brauer et al. (2017). In this figure, each column represents a single individual fish, with the phylogenetic tree and coloured boxes at the top indicating the different populations. Each row represents a different gene (this is a subset of 50 from a much larger dataset). The colour of each cell indicates whether that gene is expressed more (red) or less (blue) than average in that individual. As you can see, the different populations can clearly be distinguished by their expression profiles, with certain genes expressed more or less in certain populations.
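
The core calculation behind a heatmap like this is simple to sketch. Below is a minimal, hypothetical example in Python: made-up read counts for a handful of genes in two populations, summarised as log2 fold changes. (Real studies like Brauer et al. use dedicated RNA-seq pipelines with proper normalisation and statistical testing; this is only the intuition.)

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy counts: rows = 5 genes, columns = 6 individuals per population.
# Gene 4 is simulated as up-regulated and gene 5 as down-regulated in
# population B; everything here is made up for illustration.
pop_a = rng.poisson(lam=100, size=(5, 6))
pop_b = rng.poisson(lam=np.array([100, 100, 100, 300, 30])[:, None],
                    size=(5, 6))

def log2_fold_change(a, b, pseudocount=1.0):
    """Mean expression per gene in b relative to a, on a log2 scale."""
    return np.log2((b.mean(axis=1) + pseudocount) /
                   (a.mean(axis=1) + pseudocount))

for gene, lfc in enumerate(log2_fold_change(pop_a, pop_b), start=1):
    status = "up" if lfc > 1 else "down" if lfc < -1 else "similar"
    print(f"gene {gene}: log2FC = {lfc:+.2f} ({status} in population B)")
```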

From an evolutionary perspective, the ability to translate a single gene into multiple phenotypic traits confers a strong advantage. It allows adaptation to novel environments without waiting for natural selection to favour adaptive mutations (or for new, adaptive alleles to arise from new mutation events). This might be a fundamental trait that determines which species can become invasive pests, for instance: the ability to establish and thrive in environments very different from their native habitat allows introduced species to quickly proliferate and spread. Even for species which we might not consider ‘invasive’ (i.e. those that have naturally spread to new environments), phenotypic plasticity might allow them to rapidly adapt and evolve into new ecological niches, and could even underpin the early stages of the speciation process.

Epigenetics

Related to this alternative expression of genes is another relatively recent concept: epigenetics. In epigenetics, the expression and function of genes are controlled by chemical additions to the DNA, which can make gene expression easier or more difficult, effectively promoting or silencing genes. Generally, the specific chemicals that are attached to the DNA are relatively (but not always) predictable in their effects: for example, the addition of a methyl group to the sequence is generally associated with the repression of the gene underlying it. How and where these epigenetic markers are placed may in turn be affected by environmental conditions, creating a direct conduit between the environmental (‘nurture’) and intrinsic genetic (‘nature’) aspects of evolution.

Epigenetic mechanisms
A diagram of different epigenetic factors and the mechanisms by which they control gene expression. Source: Wikipedia.
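
As a cartoonishly simple model of that methylation effect (the assumed rule here, that a methylated gene is silenced and an unmethylated one is expressed, is a deliberate simplification; real marks are position- and context-dependent):

```python
# Toy model: the same two-gene genome in two environments, differing
# only in which genes carry a (hypothetical) methylation mark
genome = ["gene_A", "gene_B"]
methylation = {
    "environment_1": {"gene_A": False, "gene_B": True},
    "environment_2": {"gene_A": True, "gene_B": False},
}

for env, marks in methylation.items():
    # Assumed rule: methylated => repressed, unmethylated => expressed
    expressed = [g for g in genome if not marks[g]]
    print(f"{env}: expressed genes = {expressed}")
```

The DNA sequence never changes between the two environments; only the marks, and therefore the expression profile, differ.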

Typically, these epigenetic ‘marks’ (chemical additions to the DNA) are erased and reset during fertilisation: the epigenetic marks on the parental gametes are removed, and new marks are made on the fertilised embryo. However, it has been shown that this removal process is not 100% effective, and some marks are clearly passed down from parent to offspring. This means that these marks are heritable, and could evolve similarly to full DNA mutations.
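
A quick sketch of what “not 100% effective” implies across generations (the 95% per-generation erasure rate is an arbitrary assumption, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
erasure_rate = 0.95        # assumed chance a mark is reset at fertilisation

marks = np.ones(1000, dtype=bool)   # 1000 marked loci in the parent
for generation in range(1, 4):
    # Each surviving mark independently escapes erasure
    marks &= rng.random(marks.size) > erasure_rate
    print(f"generation {generation}: {marks.sum()} marks still inherited")
```

Most marks vanish within a generation or two, but a non-zero fraction escapes, which is all heritability needs to get started.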

The discovery of epigenetic markers and their influence on gene expression has opened up the possibility of understanding heritable traits which don’t appear to be clearly determined by genetics alone. For example, research into epigenetics suggests that heritable major depressive disorder (MDD) may be controlled by the expression of genes, rather than by specific alleles or genetic variants themselves. This is likely true for a number of traits for which the association with genotype is not entirely clear.

Epigenetic adaptation?

From an evolutionary standpoint again, epigenetics can similarly influence the ‘bang for a buck’ of particular genes. Being able to translate a single gene into many different forms, and for this to be linked to environmental conditions, allows organisms to adapt to a variety of new circumstances without the need for specific adaptive genes to be available. Following this logic, epigenetic variation might be critically important for species with naturally (or unnaturally) low genetic diversity to adapt into the future and survive in an ever-changing world. Thus, epigenetic information might paint a more optimistic outlook for the future: although genetic variation is, without a doubt, one of the most fundamental aspects of adaptability, even horrendously genetically depleted populations and species might still be able to be saved with the right epigenetic diversity.

Epigenetic cats example
A relatively simplified example of adaptation from epigenetic variation. In this example, we have a species of cat; the ‘default’ cat has non-tufted ears and an orange coat. These two traits are controlled by the expression of Genes A and B, respectively: in the top cat, neither gene is expressed. However, when this cat is placed into different environments, the different genes are “switched on” by epigenetic factors (the green markers). In a rainforest environment, the dark foliage makes a darker coat colour more adaptive; switching on Gene B allows this to happen. Conversely, in a desert environment, switching on Gene A causes the cat to develop tufts on its ears, which makes it more effective at hunting prey hiding in the sands. Note that in both circumstances the underlying genetic sequence (indicated by the colours in the DNA) is identical: only the expression of those genes changes.


Epigenetic research, especially from an ecological/evolutionary perspective, is a very new field. Our understanding of how epigenetic factors translate into adaptability, how epigenetic diversity performs relative to genetic diversity in driving adaptation, and what role their partial heritability plays is still in its infancy. As with many avenues of research, further studies in different contexts, experiments and scopes will further reveal this exciting new aspect of evolutionary and conservation genetics. In short: watch this space! And remember, ‘nature is nurture’ (and vice versa)!

When “getting it wrong” is ‘right’

The nature of science

Over the course of the (relatively brief) history of this blog, I’ve covered a number of varied topics. Many of these have been challenging to write about – either because they are technically-inclined and thus require significant effort to distill into something sensible and jargon-free, or because they address personal issues related to mental health or artistic expression. But despite the nature of those posts, this week’s blog has proven to be one of the most difficult to write, largely because it demands a level of personal vulnerability, acceptance of personality flaws and a potentially self-deprecating message. Alas, I find myself unable to ignore the importance of the topic as I perceive it.

It should come as no surprise to any reader, whether scientifically trained or not, that the expectation of scientific research is one of total objectivity, clarity and accuracy. Scientific research that is seen to fall short on any of these fronts is promptly labelled ‘bad science’. Naturally, of course, we aim to maximise the value of our research by addressing them as best as conceivably possible. Therein, however, lies the limitation: we cannot ever be truly objective, nor perfectly clear or accurate, in research, and acceptance and discussion of the limitations of research is a vital aspect of any paper.

The imperfections of science

The basic underpinning of this disjunction lies with the people who conduct the science. While the scientific method has been developed and improved over centuries to be as objective, factual and robust as possible, the underlying researchers will always be influenced to some degree by subjectivity. Whether we consciously mean it or not, our prior beliefs, perceptions and history influence the way we conduct and perceive science (hopefully, only to a minor extent).

Inherent biases figure
How the different aspects of ourselves can influence our research. The scientific method directly addresses the more objective aspects (highlighted in green arrows), but other subjective concepts may cause bias. Ideally, the objective parts outweigh the subjective ones (indicated by the size of the arrows), a balance helped along by the peer-review process.


Additionally, one of the drawbacks of being mortal is that we are prone to making mistakes. Biology is never perfect, and the particularly complex tasks and ideas we assign ourselves to research inevitably involve some level of incorrectness. But while that may seem to fundamentally contradict the nature of science, I argue that it is in fact not just a reality of scientific research, but also a necessity for progression.

Impostor syndrome

One widely recognised manifestation of this disjunction between idealistic and practical science, and one particularly felt by researchers in training such as post-graduate students, is ‘impostor syndrome’. This is the sometimes subtle (and sometimes more overt) feeling of inadequacy when we compare ourselves to a wider crowd: the feeling of not belonging in a particular social or professional group due to a lack of experience, talent or other ‘right’ characteristics. It is particularly pervasive in postgraduate students, as we inevitably interact with and compare ourselves to those we aspire to be like – postdoctoral researchers, professors, and other established researchers – who are naturally more experienced in the field. The jarring gap in capability, often inaccurately assumed to be a proxy for intelligence, leads many to feel incapable or inadequate as a ‘real’ scientist.

Impostor syndrome
I’d explain impostor syndrome as “feeling like being three kids stacked in a lab coat instead of a ‘real scientist’.”

It cannot be overstated that impostor syndrome is often the result of mental health issues and a high-pressure, demanding academic system, and is rarely a rational perception. In many cases, we see only the best aspects of scientific research (true for both academic students and the general public), a rose-coloured view of the process. What we don’t see is the series of failures and missteps that have led to even the best of scientific outcomes, and so we may assume that they didn’t happen. This is absolutely false.

Analysis paralysis

Another tangible impact of impostor syndrome and self-induced perfectionism is the suppression of progress. By this I mean the typical ‘procrastinating’ behaviour that comes with perfectionism: we often prevent ourselves from moving forward if we perceive that there might be (however minor) issues with our work. Within science, this often involves huge amounts of reading and preparing for an analysis without ever actually running anything. This is what has been called ‘analysis paralysis’, and it disguises inactivity under the pretence that the student is still learning the ropes.

The reality is that trying to predict the multitude of factors and problems one can run into when conducting an analysis is a monumental task. Some aspects relevant to a particular dataset or analysis are unlikely to be discussed or clearly referenced in the literature, and are thus difficult to anticipate. Problem solving is often more effective as a reactive, rather than proactive, measure: it allows researchers to respond to an issue when it arises instead of getting bogged down in the astronomical realm of “things that could possibly go wrong.”

Drawing on personal experience, this has led to literal months of reading and preparing data for running models, only to have the first dozen attempts fail or run incorrectly due to something as trivial as formatting. The lesson learnt is that I should have just tried to run the analysis early, stuffed it all up, and learnt from the mistakes with a little problem solving. No matter how much reading I did, or ever could do, some of these mistakes could never have been explicitly predicted a priori.

Analysis error messages collage
Sometimes it feels like analysis is 90% “why didn’t this work?!” I think that’s realistic, though.

Why failure is conducive to better research

While we should always strive to be as accurate and objective as possible, sometimes this can be counterproductive to our own learning. The rabbit holes of “things that could possibly go wrong” run very, very deep, and if you fall down them you’ll surely end up in a bizarre world full of odd distractions, leaps of logic and insanity. Under this circumstance, I suggest allowing yourself to get it wrong: although repeated failures are undoubtedly damaging to the ego and confidence, giving ourselves the opportunity to make mistakes and grow from them ultimately allows us to become more productive and educated than if we avoided them altogether.

Alice in Wonderland analogy
“We’re all mad here.”

Speaking from personal anecdote at least (although my story appears corroborated by other students’ experiences), some level of failure is critical to the learning process and important for scientific development generally. Although cliché, “learning from our mistakes” is inevitably one of the most effective and quickest ways to learn, and allowing ourselves to be imperfect, a little inaccurate or at times foolish is conducive to better science.

Allow yourself to stuff things up. You’ll do it way less in the future if you do.

Pressing Ctrl-Z on Life with De-extinction

Note: For some clear, interesting presentations on the topic of de-extinction, and where some of the information for this post comes from, check out this list of TED talks.

The current conservation crisis

The stark reality of conservation in the modern era epitomises the ‘crisis discipline’ label so often used to describe it: species are disappearing at an unprecedented rate, and despite our best efforts it appears that they will continue to do so. The magnitude and complexity of our impacts on the environment effectively decimate entire ecosystems (and indeed, the entire biosphere). It is thus our responsibility as ‘custodians of the planet’ (although if I had a choice, I would have sacked us as CEOs of this whole business) to attempt to prevent further extinction of our planet’s biodiversity.

Human CEO example
“….shit.”

If you’re even remotely familiar with this blog, then you would have been exposed to a number of different techniques, practices and outcomes of conservation research and its disparate sub-disciplines (e.g. population genetics, community ecology, etc.). Given the limited resources available to conserve an overwhelming number of endangered species, we attempt to prioritise our efforts towards those most in need, although a strong taxonomic bias underpins these priorities.

At least from a genetic perspective, this sometimes involves trying to understand the nature and potential of adaptation from genetic variation (as a predictor of future adaptability). Or using genetic information to inform captive breeding programs, to allow us to boost population numbers with minimal risk of inbreeding depression. Or perhaps allowing us to describe new, unidentified species which require their own set of targeted management recommendations and political legislation.

Genetic rescue

Yet another example of the use of genetics in conservation management, and one that we have previously discussed on The G-CAT, is the concept of ‘genetic rescue’. This involves actively adding new genetic material from other populations into our captive breeding programs to supplement the amount of genetic variation available for future (or even current) adaptation. While there traditionally has been some debate about the risk of outbreeding depression, genetic rescue has been shown to be an effective method for prolonging the survival of at-risk populations.

Super-gene genetic rescue
How my overactive imagination pictures ‘genetic rescue’.

There’s one catch (well, a few really) with genetic rescue: namely, that one must have other populations to ‘outbreed’ with in order to add genetic variation to the captive population. But what happens if we’re too late? What if there are no other populations to supplement with, or those other populations are also too genetically depauperate to use for genetic rescue?

Believe it or not, sometimes it’s not too late to save species, even after they have gone extinct. Which brings us from this (lengthy) introduction to this week’s topic: de-extinction. Yes, we’re literally (okay, maybe not) going to raise the dead.

Necroconservaticon
Your textbook guide to de-extinction. Now banned in 47 countries.

Backbreeding: resurrection by hybridisation

You might wonder how (or even if!) this is possible. And to be frank, it’s extraordinarily difficult. However, it has been done before, to a degree, in very specific circumstances. One scenario is based on breeding a species back into existence: sometimes we refer to this as ‘backbreeding’.

This practice really only applies in a few select scenarios. One requirement for backbreeding to be possible is that hybridisation across species has to have occurred in the past, and generally on a substantial scale. This is important as it allows the genetic variation which defines one of those species to live on within the genome of its sister species even after the original ‘host’ species goes extinct. That might make absolutely zero sense as it stands, so let’s dive in with a case study.

I’m sure you’ll recognise (at the very least, in name) these handsome fellows below: the Galápagos tortoises. They were pivotal in Charles Darwin’s research into the process of evolution by natural selection, and can live for so long that until recently there were living individuals which would have been able to remember him (assuming, you know, memory loss is not a thing in tortoises. I can’t even remember what I had for dinner two days ago, to be fair). As remarkable as they are, Galápagos tortoises actually comprise 15 different species, which can be primarily distinguished by the shape of their shells and the islands they inhabit.

Galapagos island and tortoises
A map of the Galápagos archipelago and its tortoise species, with extinct species indicated by symbols. Lonesome George was the last known living member of the Pinta Island tortoise, C. abingdonii, for reference. Source: Wikipedia.

One of these species, Chelonoidis elephantopus, also known as the Floreana tortoise after its home island, went extinct over 150 years ago, likely due to hunting and trade. However, before they all died, some individuals were transported to another island (ironically, likely by mariners) and did the dirty with another species of tortoise: C. becki. Because of this, some of the genetic material of the extinct Floreana tortoise introgressed into the genome of the still-living C. becki. In an effort to restore an iconic species, scientists from a number of institutions attempted to do what sounds like science fiction: breed the extinct tortoise back to life.

By carefully managing and selectively breeding captive individuals, successive generations of the captive population can gradually include more and more of the original C. elephantopus genetic sequence within their genomes. While a 100% resurrection might not be fully possible, by the end of the process individuals with a progressively higher proportion of the original Floreana tortoise genome will be born. Although maybe not a perfect replica, this ‘revived’ species is much more likely to serve a similar ecological role to the now-extinct species, and thus contribute to ecosystem stability. To this day, this is one of the closest attempts at reviving a long-dead species.

Is full de-extinction possible?

When you saw the title of this post, you were probably expecting some Jurassic Park-level ‘dinosaurs walking the Earth again’ information. I know I did when I first heard the term de-extinction. Unfortunately, contemporary de-extinction practices are not that far advanced just yet, although there have been some solid attempts. Experiments that take the genomic DNA from the nucleus of a dead animal’s cell and clone it within the egg of a living member of that species have effectively brought animals back from the dead. This method, however, is currently limited to animals that have died recently, as DNA degrades beyond use over time.

The same methods have been attempted for some animals which went extinct relatively recently. For example, experiments involving the Pyrenean ibex (bucardo) were successful in generating an embryo, but not in sustaining a living organism: the cloned bucardo died 10 minutes after birth due to a critical lung condition.

The challenges and ethics of de-extinction

One might expect that, as genomic technologies improve (particularly genome-editing methods facilitated by the development of CRISPR/Cas9), we might one day be able to truly resurrect an extinct species. But this leads to strongly debated questions about the ethics and morality of de-extinction. If we can bring a species back from the dead, should we? What are the unexpected impacts of its revival? How will we prevent history from repeating itself, and the species simply going extinct again? In a rapidly changing world, how can we account for the differences in environment between when the species was alive and now?

Deextinction via necromancy figure
The Chaotic Neutral (?) approach to de-extinction.

There is no clear, simple answer to many of these questions. We are only scratching the surface of the possibility of de-extinction, and I expect that this debate will only accelerate with the research. One thing remains eternally true, though: it is still the distinct responsibility of humanity to prevent more extinctions in the future. Handling the growing climate change problem and the collapse of ecosystems remains a top priority for conservation science, and without a solution there will be no stable planet on which to de-extinct species.

de-extinction meme
You bet we’re gonna make a meme months after it’s gone out of popularity.

The ‘other’ allele frequency: applications of the site frequency spectrum

The site-frequency spectrum

In order to simplify our absolutely massive genomic datasets down to something more computationally feasible for modelling techniques, we often reduce them to some form of summary statistic: an aspect of the genomic data that captures the variation or distribution of alleles within the dataset without requiring the entire genetic sequence of every sample.

One very effective summary statistic that we might choose to use is the site-frequency spectrum (aka the allele frequency spectrum). Not to be confused with other allele frequency-based measures which we’ve discussed before (like Fst), the site-frequency spectrum (abbreviated to SFS) is essentially a histogram of how frequent certain alleles are within our dataset. To build it, we classify each allele into a category based on how many times it is observed, tallying up the number of alleles that occur at that frequency. The total number of categories is the maximum number of times an allele could occur: for organisms with two copies of every chromosome (‘diploids’, including humans), this is double the number of samples included. For example, a dataset comprising genomic sequences of 5 people would have 10 different frequency bins.
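
In code, building an SFS is just a tally. Here’s a minimal sketch with a made-up genotype matrix (in practice the counts would come from a VCF or similar):

```python
import numpy as np

# Rows = variable sites (SNPs), columns = 5 diploid individuals,
# entries = copies of the allele carried at that site (0, 1 or 2).
# These values are made up for illustration.
genotypes = np.array([
    [0, 1, 0, 0, 0],   # allele seen once: a 'singleton'
    [1, 1, 0, 0, 0],   # seen twice: a 'doubleton'
    [2, 1, 1, 0, 0],   # seen four times
    [0, 0, 0, 0, 1],   # another singleton
])

n_chromosomes = 2 * genotypes.shape[1]       # diploid: 2N = 10 bins

# Count allele copies per site, then tally how many sites fall in each bin
site_counts = genotypes.sum(axis=1)
sfs = np.bincount(site_counts, minlength=n_chromosomes + 1)

for count, n_sites in enumerate(sfs):
    print(f"alleles occurring {count}x: {n_sites} sites")
```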

For one population

The SFS for a single population – called the 1-dimensional SFS – is very easy to visualise as a concept. In essence, it’s just a frequency distribution of all the alleles within our dataset. Generally, the distribution follows an exponential shape, with many more rare alleles (e.g. ‘singletons’) than common ones. However, the exact shape of the SFS is determined by the history of the population, and as with other analyses under coalescent theory, we can use our understanding of the interaction between demographic history and current genetic variation to study past events.

1DSFS example
An example of the 1DSFS for a single population, taken from a real dataset from my PhD. Left: the full site-frequency spectrum, counting how many alleles (y-axis) occur a certain number of times (categories of the x-axis) within the population. In this example, as in most species, the vast majority of our DNA sequence is non-variable (frequency = 0). Given the huge disparity in the number of non-variable sites, we often keep only the variable ones (and even then, often discard the 1 category to remove potential sequencing errors) and get a graph more like the right. Right: the ‘realistic’ 1DSFS for the population, showing a general exponential decline (the blue trendline) towards the more frequent classes. This is pretty standard for an SFS. ‘Singleton’ and ‘doubleton’ are alternative names for ‘alleles which occur once’ and ‘alleles which occur twice’ in an SFS.

Expanding the SFS to multiple populations

Further to this, we can expand the site-frequency spectrum to compare across populations. Instead of a simple 1-dimensional frequency distribution, for a pair of populations we can build a grid, which specifies how often a particular allele occurs at a certain frequency in Population A and at a certain frequency in Population B. This can also be visualised quite easily, albeit as a heatmap instead. We refer to this as the 2-dimensional SFS (2DSFS).

2DSFS example
An example of a 2DSFS, also taken from my PhD research. In this example, we are comparing Population A, containing 5 individuals (as diploids, 2 x 5 = max. of 10 occurrences of an allele), with Population B, containing 4 individuals. Each row denotes the frequency at which a certain allele occurs in Population B, whilst the columns indicate the frequency at which it occurs in Population A. Each cell therefore indicates the number of alleles that occur at the exact frequencies of the corresponding row and column. For example, the first cell (highlighted in green) indicates the number of alleles which are not found in either Population A or Population B (this dataset is a subsample from a larger one). The yellow cell indicates the number of alleles which occur 4 times in Population B and also 4 times in Population A. This could mean that in one of those populations 4 individuals have one copy of that allele each, or two individuals have two copies each, or one has two copies and two have one copy each. The exact composition of how the alleles are spread across samples within each population doesn’t matter to the overall SFS.
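
Computationally, the 2DSFS is the same tally in two dimensions. A minimal sketch with made-up per-site allele counts (population A: 5 diploids, so counts 0-10; population B: 4 diploids, so counts 0-8):

```python
import numpy as np

# Made-up per-site counts of an allele in each population
counts_a = np.array([0, 1, 4, 4, 2, 7, 0, 1])   # 0..10 possible (2N = 10)
counts_b = np.array([0, 2, 4, 3, 2, 8, 1, 0])   # 0..8 possible (2N = 8)

# Cell [i, j] = number of sites where the allele occurs i times in
# population B (rows) and j times in population A (columns)
sfs2d = np.zeros((8 + 1, 10 + 1), dtype=int)
np.add.at(sfs2d, (counts_b, counts_a), 1)

print(sfs2d)
```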

The same concept can be expanded to even more populations, although this gets harder to represent visually. Essentially, we end up with a set of matrices which describe the frequency of certain alleles across all of our populations, merged together into the joint SFS. For example, a joint SFS of 4 populations would consist of 6 2DSFSs combined together (4 x 4 total comparisons, minus the 4 self-comparisons, then halved to remove duplicate comparisons). To make sense of this, check out the diagrammatic tables below.

populations for jsfs
A summary of the different combinations of 2DSFSs that make up a joint SFS matrix. In this example we have 4 different populations (as described in the above text). Red cells denote comparisons between a population and itself – which is effectively redundant. Green cells contain the actual 2D comparisons that would be used to build the joint SFS: the blue cells show the same comparisons but in mirrored order, and are thus redundant as well.
Annotated jSFS heatmap
Expanding the above jSFS matrix to the actual data, this matrix demonstrates how it is really a collection of multiple 2DSFSs. Each cell gives the number of alleles which occur at frequency x in one population and frequency y in another. For example, if we took the cell in the third row from the top and the fourth column from the left, we would be looking at the number of alleles which occur twice in Population B and three times in Population A. The colour of this cell is more or less orange, indicating that ~50 alleles occur at this combination of frequencies. As you may notice, many population pairs show similar patterns, except for the Population C vs Population D comparison.
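
For completeness, the set of population pairs that make up a joint SFS is just the unordered combinations, which is easy to enumerate:

```python
from itertools import combinations

populations = ["A", "B", "C", "D"]
pairs = list(combinations(populations, 2))
print(len(pairs), "2DSFSs:", pairs)   # 6 pairs for 4 populations
```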

The different forms of the SFS

Which alleles we choose to use within our SFS is particularly important. If we don’t have a lot of information about the genomics or evolutionary history of our study species, we might choose to use the minor allele frequency (MAF). Given that SNPs tend to be biallelic, at any given locus we could have Allele A or Allele B. The MAF takes the less frequent of these two within the dataset and uses that in the summary SFS: since the major allele’s frequency is simply 2N minus the minor allele’s frequency, it’s not included in the summary. An SFS made from the MAF is also referred to as the folded SFS.

Alternatively, if we know some things about the genetic history of our study species, we might be able to divide Allele A and Allele B into derived and ancestral alleles. Since SNPs often arise as mutations at a single site in the DNA, one allele at a given site is the new mutation (the derived allele) whilst the other is the ‘original’ (the ancestral allele). Typically, we would use the derived allele frequency to construct the SFS, since under coalescent theory we’re trying to simulate that mutation event. An SFS made from derived alleles only is also referred to as the unfolded SFS.
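
The relationship between the two is mechanical: a minor-allele count of i is indistinguishable from a derived-allele count of 2N - i, so folding simply adds the mirrored bins together. A small sketch (the example counts are arbitrary):

```python
import numpy as np

def fold_sfs(unfolded):
    """Collapse an unfolded (derived-allele) SFS into a folded
    (minor-allele) SFS by combining bins i and 2N - i."""
    n = len(unfolded) - 1                 # n = 2N chromosomes
    folded = np.zeros(n // 2 + 1, dtype=unfolded.dtype)
    for i in range(n // 2 + 1):
        j = n - i
        folded[i] = unfolded[i] + (unfolded[j] if j != i else 0)
    return folded

# Arbitrary unfolded SFS for 2N = 10 (bins 0..10)
unfolded = np.array([120, 40, 22, 14, 9, 7, 5, 4, 3, 2, 1])
print(fold_sfs(unfolded))                 # bins 0..5
```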

Applications of the SFS

How can we use the SFS? Well, it can more or less be used as a summary of genetic variation for many types of coalescent-based analyses. This means we can make inferences about demographic history (see here for a more detailed explanation) without simulating large and complex genetic sequences, using the SFS instead. Comparing our observed SFS to the SFS expected under a simulated scenario (say, a bottleneck) allows us to estimate the likelihood of that scenario.

For example, we would predict that under a scenario of a recent genetic bottleneck in a population, alleles which are rare in the population will be disproportionately lost due to genetic drift. Because of this, the overall shape of the SFS will shift dramatically to the right, leaving a clear genetic signal of the bottleneck. This works under the same theoretical background as coalescent tests for bottlenecks.

SFS shift from bottleneck example
A representative example of how a bottleneck causes a shift in the SFS, based on a figure from a previous post on the coalescent. Centre: the diagram of alleles through time, with rarer variants (yellow and navy) being lost during the bottleneck but more common variants surviving (red). Left: this trend is reflected in the coalescent trees for these alleles, with red crosses indicating the complete loss of that allele. Right: the SFS from before (in red) and after (in blue) the bottleneck event for the alleles depicted. Before the bottleneck, variants are spread in the usual exponential shape; afterwards, however, a disproportionate loss of the rarer variants causes the distribution to flatten. Typically, the SFS would be built from more alleles than shown here, and extend much further.

Contrastingly, a large or growing population will have a larger number of rare (i.e. unique) alleles arising from the sudden growth and increase in genetic variation. Thus, opposite to the bottleneck, the SFS distribution will be biased towards the left end of the spectrum, with an excess of low-frequency variants.

SFS shift from expansion example
A similar diagram to the above, but this time with an expansion event rather than a bottleneck. The expansion of the population, and the subsequent increase in Ne, facilitates the appearance of new alleles by mutation (and reduces the loss of alleles to drift), causing more new (and thus rare) alleles to appear. This is shown by both the coalescent tree (left) and a shift in the SFS (right).

The SFS can even be used to detect alleles under natural selection. For strongly selected parts of the genome, alleles should occur at either high (if positively selected) or low (if negatively selected) frequency, with a deficit of more intermediate frequencies.

Adding to the analytical toolbox

The SFS is just one of many tools we can use to investigate the demographic history of populations and species. Using a combination of genomic technologies, coalescent theory and more robust analytical methods, the SFS appears to be poised to tackle more nuanced and complex questions of the evolutionary history of life on Earth.

Mr. Gorbachev, tear down this (pay)wall

The dreaded paywall

For anyone who absorbs their news and media through the Internet (hello, welcome to the 21st Century), you would undoubtedly be familiar with a few frustrating and disingenuous aspects of media such as clickbait headlines and targeted advertising. Another one that might aggravate the common reader is Ol’ Reliable, the paywall – blocking access to an article unless some volume of money is transferred to the publisher, usually on a subscription basis. You might argue that this is a necessary evil, or that rewarding well-written pieces and informative journalism with money might (extremely optimistically) lead the free market to starve out poor media. Or you might argue that the paywall is morally corrupt and greedy, and just another way to extort money out of hapless readers.

Paywalls
Yes, that is a literal paywall. And no, I don’t do subtlety.

Accessibility in science

I’m loath to tell you that even science, the powerhouse of objectivity with peer review to increase accountability, is stifled by the weight of corporate greed. You may notice this with some big-name journals, like Nature and Science – articles cost money to access, either at the individual level (e.g. per article, or as a yearly subscription for a single person) or for an entire institution (such as a university). To state that these paywalls are exorbitantly priced would be a tremendous understatement – for reference, an institutional subscription to the single journal Nature (one of 2,512 journals listed under the conglomerate of Springer Nature) costs nearly $8,000 per year. A download of a single paper often costs around $30 for a curious reader.

Some myths about the publishing process

You might be under the impression, as above, that this money goes towards developing good science and providing a support network for sharing and distributing scientific research. I wish you were right. In his book ‘The Effective Scientist’, Professor Corey Bradshaw describes the academic publishing process as “knowledge slavery”, and no matter how long I spent thinking about this blog post, I could never come up with a more macabre yet apt description. While I highly recommend his book for a number of reasons, his summary and interpretation of how publishing in science actually works (both the strengths and the pitfalls) is particularly informative.

There are a number of aspects of publishing in science that make it so toxic to researchers. For example, the entirety of the funds acquired through the publishing process goes to the publisher – none of it goes to the scientists who performed and wrote the work, none to the scientists who reviewed and critiqued the paper prior to publication, and none to the institutions that provided the resources to develop the science. In fact, the perception is that if you publish science in a journal, especially a high-ranking one, it should be an honour just to have your paper in that journal. You got into Nature – what more do you want?

Publishing cycle.jpg
The alleged cycle of science. You do Good Science; said Good Science gets published in an equally Good Journal; the associated pay increase (not from the paper itself, of course, but via improved success rates for grant applications and collaborations) helps to fund the next round of Good Science and the cost of publishing in a Good Journal. Unfortunately, and critically, the first step into the cycle (the yellow arrow) is remarkably difficult and acts as a barrier to many researchers (many of whom do Very Good Science).

Open Access journals

Thankfully, some journals exist which publish science without the paywall: we refer to these as ‘Open Access’ (OA) journals. Although the increased accessibility is undoubtedly a benefit for the spread of scientific knowledge, the reduced revenue often means that a successful submission comes with an associated cost. This cost is usually presented as an ‘article processing charge’, which for a semi-decent journal can run upwards of several thousand dollars per paper. Submitting to an OA journal can thus be a delicate balance: the increased exposure, transparency and freedom to disseminate research is a definite positive for scientists, but the exorbitant costs can preclude smaller or less well-funded labs from publishing in them (regardless of the quality of the science produced).

Open access logo.png
The logo for Open Access journals, originally designed by PLoS.

Manuscripts and ArXives

There is something of a counterculture to the rigorous tyranny of scientific journals: some sites exist where scientists can freely upload their manuscripts and articles without a paywall or submission cost. Naturally, the publishing industry reviles this, and many of these sites are not strictly legal (since authors effectively hand over almost all publishing rights to the journal upon publication). The most notable of these is Sci-Hub, which uses various techniques (including cycling through different domain names in different countries) to bypass paywalls.

Other, more user-generated options exist, such as the different subcategories of ArXiv, where users can upload their own manuscripts free of charge and without a paywall, predominantly prior to the peer-review process. By being publicly uploaded, ArXiv preprints allow scientists to broaden the peer-review process beyond a few journal-selected reviewers. There is still some screening when submitting to ArXiv, to filter out non-scientific articles, but the overall process is much more transparent and scientist-friendly than that of a typical publishing corporation. For articles that have already been published, other sites such as ResearchGate often act as conduits for sharing research (either work obscured by paywalls, despite copyright issues, or work freely available through open access).

You might also have heard through the grapevine that “scientists are allowed to send you PDFs of their research if you email them.” This is a bit of a dubious copyright loophole: often, it is not strictly within the acceptable domain of publishing rights, as the journal that published the research maintains all copyright to the work (clever). Out of protest, many scientists will send their research to interested parties anyway, often with the caveat that it not be shared further, or in manuscript form (as opposed to the finalised published article). Regardless, scientists are more than eager to share their research however they can.

Summary table.jpg
A summary of some of the benefits and detriments of each journal type. For articles published on pre-print sites there is usually still the intention of (at some date) publishing the article under one of the other two official journal models, so these categories are not mutually exclusive.

Civil rights and access to science

There are a number of both empirical and philosophical reasons why free access to science is critically important for all people. At least one of these (among many others) is based on your civil rights. Scientific research is incredibly expensive and is often funded through a number of grants from various sources, among the most significant of which are government-funded programs such as the Australian Research Council (ARC).

Where does this money come from? Well, indirectly, you (if you pay your taxes, anyway). This connection can at times be frustrating for scientists – particularly when the commercial, technological or medical impact of a project is limited or not readily apparent – but the same logic applies to access to scientific data and results. As someone who has contributed monetarily to the formation and presentation of scientific work, you arguably have every right to access the results of that work. Although privatisation ultimately overpowers this in the publishing world, there is (in my opinion) a strong moral philosophy behind demanding access to the results of the research you have helped to fund.

Walled off from research

Anyone who has attempted to publish in the scientific literature is undoubtedly keenly aware of the overt corruption and inadequacy of the system. Private businesses hold a monopoly on the dissemination of scientific research, and although scientists try to work around this, the structure is pervasive. However, changes are underway that seek to reinvent the way we handle the publishing of scientific research, and with strong support from the general public there is an opportunity to minimise the damage that private publishing businesses cause.

Two Worlds: contrasting Australia’s temperate regions

Temperate Australia

Australia is renowned for its unique diversity of species, and likewise for the diversity of ecosystems across the island continent. Although many would typically associate Australia with the golden sandy beaches, palm trees and warm weather of the tropical east coast, other ecosystems hold their own beautiful and interesting characteristics. Even the regions that might typically seem the dullest – the temperate zones in the southern portion of the continent – hold unique stories of the bizarre and wonderful environmental history of Australia.

The two temperate zones

Within Australia, the temperate zone is actually separated into two very distinct and separate regions. In the far south-western corner of the continent is the southwest Western Australia temperate zone, which spans a significant portion of the continent’s south-western coastline. In the south-eastern corner, the (unnamed) temperate zone extends from the region surrounding Adelaide at its westernmost point eastward, encompassing Tasmania and Victoria before shifting northward into NSW. This temperate zone gradually develops into the sub-tropical and tropical climates of more northern latitudes in Queensland and across to Darwin.


Labelled Koppen-Geiger map
The climatic classification (Köppen-Geiger) of Australia’s ecosystems, derived from the Atlas of Living Australia. The light blue regions highlight the temperate zones discussed here, with an isolated region in the SW and the broader region of the SE, which transitions into subtropical and tropical climates northward.

The divide separating these two regions might be familiar to some readers – the Nullarbor Plain. Not just a particularly good location for fossils and mineral ores, the Nullarbor Plain is an almost perfectly flat arid expanse that stretches from the western edge of South Australia to the temperate zone of the southwest. As the name suggests (from the Latin for ‘no trees’), the plain is almost totally devoid of significant tree cover, owing to the lack of available water at the surface. The plain is a relatively ancient geological structure, which finished forming somewhere between 14 and 16 million years ago when tectonic uplift pushed a large limestone block up to the surface of the crust; as the continent aridified, the limestone acted as an effective drain for any standing water. Thus, despite being relatively similar bioclimatically, the two temperate zones of Australia have been disconnected for millions of years and boast very different histories and biota.

Elevation map of NP.jpg
A map of elevation across the Australian continent, also derived from the Atlas of Living Australia. The dashed black line roughly outlines the extent of the Nullarbor Plain, a massively flat arid expanse.

The hotspot of the southwest

The southwest temperate zone – commonly referred to as southwest Western Australia (SWWA) – is an island-like bioregion. Isolated from the rest of temperate Australia, it is remarkably geologically simple, with little topographic variation (only the Darling Scarp separates the lower coast from the higher elevation of the Darling Plateau), generally minor river systems and low levels of soil nutrients. One key factor determining complexity in the SWWA environment is the isolation of high-rainfall habitats within the broader temperate region – think of islands within an island.

SSWA environment.jpg
A figure demonstrating the environmental characteristics of SWWA, using data from the Atlas of Living Australia. Left: an elevation map of the region, showing some mountainous variation, but only one significantly steep change along the coast (blue area). Right: a summary of 19 different temperature and precipitation variables, showing a relatively weak gradient as the region shifts inland.

Despite this lack of geological complexity, and despite the perceived diversity of the tropics, the temperate zone of SWWA is the only internationally recognised biodiversity hotspot within Australia. As an example, SWWA is inhabited by ~7,000 different plant species, half of which are endemic to the region (not to discredit the impressive diversity of the rest of the continent, of course). So why does this area have even higher levels of species diversity and endemism than the rest of mainland Australia?

speciation patterns in SWWA.jpg
A demonstration of some of the different patterns which might explain the high biodiversity of SWWA, from Rix et al. (2015). These predominantly relate to different biogeographic mechanisms that might have driven diversification in the region, from survivors of the Gondwana era to the more recent fragmentation of mesic habitats.

Well, a number of factors likely play significant roles. One of these is the ancient and isolated nature of the region: SWWA has been separated from the rest of Australia for at least 14 million years, with many species likely originating much earlier than this. Because of this isolation, species occurring within SWWA have been able to undergo adaptive divergence from their east-coast relatives, forming unique evolutionary lineages. Furthermore, the southwest corner of the continent was one of the last to break away from Antarctica during the dismantling of Gondwana >30 million years ago. Within the region more generally, the isolation of mesic (wetter) habitats from the broader arid (xeric) habitats also likely drove the formation of new species, as distributions became fragmented or as species adapted to the new, encroaching xeric habitat. Together, these various mechanisms all likely contributed in some way to the overall diversity of the region.

The temperate south-east of Australia

Contrastingly, the temperate region in the south-east of the continent is much more complex. For one, the topography of the zone is much more variable: there are a number of prominent mountain chains (such as the extended Great Dividing Range), lowland basins (such as the expansive Murray-Darling Basin) and varied valley and river systems. Similarly, the climate varies significantly within this temperate region, with the more northern parts featuring subtropical conditions, with wetter and hotter summers than the southern end. There is also a general trend of increasing rainfall and decreasing temperature along the highlands of the south-east portion of the region, with dry, semi-arid conditions in the western lowlands.

MDB map
A map demonstrating the climatic variability across the Murray-Darling Basin (which makes up a large section of the SE temperate zone), from Brauer et al. (2018). The heat maps on the left describe different types of variables: a) and b) represent temperature variables, c) and d) represent precipitation (rainfall) variables, and e) and f) represent water flow variables. Each panel summarises a different set of underlying variables, hence the differences between them.

A complicated history

The south-east temperate zone is not only variable now, but has undergone some drastic environmental changes throughout its history. Massive shifts in geology, climate and sea levels have repeatedly altered the nature of the area; even volcanic activity has featured at some points in the past.

One key hydrological shift that massively altered the region was the paleo-megalake Bungunnia. Not just a list of adjectives, Bungunnia was exactly as described: a massive lake that spread across a huge area prior to its demise ~1-2 million years ago. At its largest, Lake Bungunnia reached an area of over 50,000 km², spreading from its westernmost point near the current Murray mouth through to halfway across Victoria. The lake initially formed due to a tectonic uplift event along the coastal edge of the Murray-Darling Basin ~3.2 million years ago, which dammed the ancestral Murray River (whose outlet to the ocean was historically much further east than it is today). Over the next few million years, the size of the lake fluctuated significantly with climatic conditions, with wetter periods causing the lake to overfill and burst its banks. With every burst, the lake shrank in size, until a final break ~700,000 years ago when the ‘dam’ broke and the full lake drained.

Lake Bungunnia map 2.jpg
A map demonstrating the sheer size of paleo-megalake Bungunnia at its largest extent, taken from McLaren et al. (2012).

Another change in the historic environment that readers may be more familiar with is the land bridge that used to connect Tasmania to the mainland. Dubbed the Bassian Isthmus, this land bridge appeared at various points in history when sea levels were lower (i.e. during glacial periods of the Pleistocene cycles), predominantly connecting via the still-above-water Flinders and Cape Barren Islands. At lower sea levels, however, the land bridge spread as far west as King Island; central to this block of land was a large lake dubbed Bass Lake (creative). The Bassian Isthmus played a critical role in the migration of many of the native fauna of Tasmania (likely including the Indigenous peoples of the now-island), and its submergence and the resulting isolation led to some distinctive differences between Tasmanian and mainland biota. Today, the historic presence of the Bassian Isthmus has left a distinctive mark on the genetic make-up of many species native to the south-east of Australia, including dolphins, frogs, freshwater fishes and invertebrates.

Bass Strait bathymetric contours.jpg
An elevation (ETOPO1) map demonstrating the now-underwater land bridge between Tasmania and the mainland. Orange colours denote higher areas whilst light blues represent lower sections.

Don’t underestimate the temperates

Although tropical regions get most of the hype as hotspots of biodiversity, the temperate zones of Australia similarly boast high diversity and unique species, and document a complex environmental history. Studying how the biota and environments of the temperate regions have changed over millennia is critical to predicting the future effects of climatic change across large ecosystems.