Earlier in the year, I made a comment that, as part of the natural evolution of this blog, I would try to change up the writing format every now and then to something a little more personal, emotional and perhaps tangential to the science. I must confess that this is one of those weeks, as it’s been an emotional rollercoaster for me. So, sorry in advance for the self-oriented, reflective nature of this piece.
Contrastingly, sometimes we might also use genetic information to do the exact opposite. While so many species on Earth are at risk of extinction (or have already passed over the precipice), some have gone rogue with our intervention. These are, of course, invasive species: pests that have been introduced into new environments and, by their prolific nature, start to throw off the balance of the ecosystem. Australians will be familiar with no shortage of invasive species, the most notable of which is the cane toad, Rhinella marina. However, there is a plethora of invasive species, ranging from the notably prolific (such as the cane toad) to the seemingly mundane (such as the blackbird): so how can we possibly deal with the sheer number and spread of pests?
Tools for invasive species management
There are a number of tools at our disposal for dealing with invasive species. These range from chemical controls (like pesticides), to biological controls and more recently to targeted genetic methods. Let’s take a quick foray into some of these different methods and their applications to pest control.
The potential secondary impacts of biological controls, and the unpredictability of how they will respond to a new environment (and how native species will respond to their introduction), have led conservationists to develop new, more specific techniques. Similarly, viral and bacterial controls have had limited success, although they are still often proposed in conservation management (such as the planned carp herpesvirus release).
The better we understand invasive species and populations from a genetic perspective, the more informed our management efforts can be and the more likely we are to be able to adequately address the problem.
Managing invasive pest species
The impact of human settlement in new environments extends far beyond our direct activities. Particularly in the last few hundred years, human migration has been an effective conduit for the spread of ecologically disastrous species which undermine the health and stability of ecosystems around the globe. As such, it is our responsibility to Earth to attempt to address the problems we have created: new genetic techniques are but one growing avenue by which we might be able to remove these invasive pests.
Beyond the apparent ethical and moral objections to the invasive nature of demanding genetic testing for Indigenous peoples, a crucial question is one of feasibility: even if you decided to genetically test for race, would it be possible? It might come as a surprise to non-geneticists that, from a genetic perspective, race is not a particularly stable concept.
This is especially difficult for people who have fewer sequenced ancestors or relatives; without a reference for genetic variation, it can be even harder to trace their genetic ancestry. Such is the case for Indigenous Australians, for whom there is a distinct lack of available genetic data (especially compared to European-descended Australians).
The non-genetic components
The genetic non-identifiability of race is but one aspect that undermines the rationale for genetic race testing. As we discussed in the previous post on The G-CAT, the connection between genetic underpinnings and physical traits is not always clear or linear. The role of the environment in the expression of genetic variation, as well as its broader influence on behaviour, philosophy and culture, means that far more than the genome contributes to a person’s identity. For any given person, how they express and identify themselves is often more strongly associated with non-genetic traits such as beliefs and culture.
These factors cannot reliably be tested under a genetic framework. While genes may have some influence on how a person’s psychology develops, they cannot predict the lifestyle, culture and complete identity of that person. For Indigenous Australians, this has been compounded by the corruption and disruption of their identity through the Stolen Generations. As a result, many Indigenous descendants may not appear (from a genetic point of view) to be ‘purely’ Indigenous, but their identity and culture as Indigenous people are no less valid. To suggest that genetic ancestry determines identity more strongly than anything else is not only naïve from a scientific perspective, but nothing short of a horrific simplification and degradation of those seeking to reclaim their identity and culture.
The non-identifiability of genetic race
The science of genetics overwhelmingly suggests that there is no fundamental genetic underpinning of ‘race’ that can be reliably used. Furthermore, the impact of non-genetic factors on the more important aspects of personal identity, such as culture, tradition and beliefs, demonstrates that attempting to delineate people into subcategories by genetic identity is an unreliable exercise. Instead, genetic research and biological history fully acknowledge and embrace the diversity of the global human population. As it stands, the phrase ‘human race’ might be the most biologically sound classification of people: we are all the same.
It should come as no surprise to any reader of The G-CAT that I’m firmly against the false dichotomy (and yes, I really do love that phrase) of “nature versus nurture”. Primarily, this is because the phrase gives the impression of some kind of counteracting balance between intrinsic (i.e. usually genetic) and extrinsic (i.e. usually environmental) factors and the roles they play in behaviour, ecology and evolution. While both are undoubtedly critical for adaptation by natural selection, posing this as a black-and-white split ignores the possibility that traits arise from interactions between the two.
However important the underlying genes are for the formation of proteins and the definition of physiology, they are not omnipotent in that regard. In fact, many other factors can influence how genetic traits relate to phenotypic traits: we’ve touched on a number of these previously. One example is interactions across different genes: physiological traits can be encoded by the cumulative presence and nature of many loci (as in quantitative trait loci and polygenic adaptation). Alternatively, one gene may translate into multiple different physiological characters if it shows pleiotropy.
From an evolutionary standpoint again, epigenetics can similarly influence the ‘bang for a buck’ of particular genes. Being able to translate a single gene into many different forms, and for this to be linked to environmental conditions, allows organisms to adapt to a variety of new circumstances without the need for specific adaptive genes to be available. Following this logic, epigenetic variation might be critically important for species with naturally (or unnaturally) low genetic diversity to adapt into the future and survive in an ever-changing world. Thus, epigenetic information might paint a more optimistic outlook for the future: although genetic variation is, without a doubt, one of the most fundamental aspects of adaptability, even horrendously genetically depleted populations and species might still be able to be saved with the right epigenetic diversity.
Over the course of the (relatively brief) history of this blog, I’ve covered a number of varied topics. Many of these have been challenging to write about – either because they are technically inclined and thus require significant effort to distill into something sensible and jargon-free, or because they address personal issues related to mental health or artistic expression. But despite the nature of those posts, this week’s blog has proven to be one of the most difficult to write, largely because it demands a level of personal vulnerability, acceptance of personality flaws and a potentially self-deprecating message. Alas, I find myself unable to ignore the importance I place on the topic.
It should come as no surprise to any reader, whether scientifically trained or not, that the expectation of scientific research is one of total objectivity, clarity and accuracy. Research that is seen to fall short of these standards is swiftly labelled ‘bad science’. Naturally, we aim to maximise the value of our research by addressing these as best as conceivably possible. Therein, however, lies the limitation: we cannot ever be truly objective, clear or accurate in research, and acknowledging and discussing the limitations of research is a vital aspect of any paper.
The imperfections of science
The basic underpinning of this disjunction lies with the people who conduct the science. While the scientific method has been developed and refined over centuries to be as objective, factual and robust as possible, the underlying researchers will always be affected to some degree by subjectivity. Whether we consciously mean it or not, our prior beliefs, perceptions and history influence the way we conduct and perceive science (hopefully, only to a minor extent).
Additionally, one of the drawbacks of being mortal is that we are prone to making mistakes. Biology is never perfect, and the particularly complex tasks and ideas we assign ourselves to research inevitably involve some level of incorrectness. But while that may seem to fundamentally contradict the nature of science, I argue that it is in fact not just a reality of scientific research, but also a necessity for progression.
It cannot be overstated that impostor syndrome is often the result of mental health struggles and a high-pressure, demanding academic system, and is rarely a rational perception. In many cases, we see only the best aspects of scientific research (both as academic students and as the general public), a rose-coloured view of the process. What we don’t see, however, is the series of failures and missteps that have led to even the best of scientific outcomes, and so we may assume that they didn’t happen. This is absolutely false.
The reality is that trying to predict the multitude of factors and problems one can run into when conducting an analysis is a monumental task. Some aspects relevant to a particular dataset or analysis are unlikely to be discussed or clearly referenced in the literature, and are thus difficult to anticipate. Problem solving is often more effective as a reactive, rather than proactive, measure: it allows researchers to respond to an issue when it arises instead of getting bogged down in the astronomical realm of “things that could possibly go wrong.”
Drawing on personal experience, this has meant literal months of reading and preparing data for running models, only to have the first dozen attempts fail to run, or run incorrectly, due to something as trivial as formatting. The lesson learnt is that I should have just tried to run the analysis early, stuffed it all up, and learnt from the mistakes with a little problem solving. No matter how much reading I did, or ever could do, some of those mistakes could never have been explicitly predicted a priori.
Why failure is conducive to better research
While we should always strive to be as accurate and objective as possible, sometimes this can be counterproductive to our own learning. The rabbit holes of “things that could possibly go wrong” run very, very deep, and if you fall down them you’ll surely end up in a bizarre world full of odd distractions, leaps of logic and insanity. Under this circumstance, I suggest allowing yourself to get it wrong: although repeated failures are undoubtedly damaging to the ego and confidence, giving ourselves the opportunity to make mistakes and grow from them ultimately allows us to become more productive and educated than if we avoided them altogether.
Speaking from personal anecdote at least (although my story appears to be corroborated by other students’ experiences), some level of failure is critical to the learning process and important for scientific development generally. Although cliché, “learning from our mistakes” is inevitably one of the most effective and quickest ways to learn, and allowing ourselves to be imperfect, a little inaccurate or at times foolish is conducive to better science.
Allow yourself to stuff things up. You’ll do it way less in the future if you do.
Note: For some clear, interesting presentations on the topic of de-extinction, and where some of the information for this post comes from, check out this list of TED talks.
The current conservation crisis
The stark reality of conservation in the modern era epitomises the ‘crisis discipline’ label so often used to describe it: species are disappearing at an unprecedented rate, and despite our best efforts it appears that they will continue to do so. The magnitude and complexity of our impacts on the environment are effectively decimating entire ecosystems (and indeed, the entire biosphere). It is thus our responsibility as ‘custodians of the planet’ (although if I had a choice, I would have sacked us as CEOs of this whole business) to attempt to prevent further extinction of our planet’s biodiversity.
There’s one catch (well, a few really) with genetic rescue: namely, that one must have other populations to ‘outbreed’ with in order to add genetic variation to the captive population. But what happens if we’re too late? What if there are no other populations to supplement with, or those other populations are also too genetically depauperate to use for genetic rescue?
Believe it or not, sometimes it’s not too late to save species, even after they have gone extinct. Which brings us from this (lengthy) introduction to this week’s topic: de-extinction. Yes, we’re literally (okay, maybe not) going to raise the dead.
Backbreeding: resurrection by hybridisation
You might wonder how (or even if!) this is possible. And to be frank, it’s extraordinarily difficult. However, it has to a degree been done before, in very specific circumstances. One scenario is based on breeding out a species back into existence: sometimes we refer to this as ‘backbreeding’.
This practice really only applies in a few select scenarios. One requirement for backbreeding to be possible is that hybridisation across species has to have occurred in the past, generally on a substantial scale. This is important because it allows the genetic variation which defines one of those species to live on within the genome of its sister species even after the original ‘host’ species goes extinct. That might make absolutely zero sense as it stands, so let’s dive into this with a case study.
One of these species, Chelonoidis elephantopus, also known as the Floreana tortoise after its home island, went extinct over 150 years ago, likely due to hunting and trade. However, before they all died, some individuals were transported to another island (ironically, likely by mariners) and did the dirty with another species of tortoise: C. becki. Because of this, some of the genetic material of the extinct Floreana tortoise introgressed into the genome of the still-living C. becki. In an effort to restore an iconic species, scientists from a number of institutions attempted to do what sounds like science fiction: breed the extinct tortoise back to life.
When you saw the title for this post, you were probably expecting some Jurassic Park-level ‘dinosaurs walking on Earth again’ information. I know I did when I first heard the term de-extinction. Unfortunately, contemporary de-extinction practices are not that far advanced just yet, although there have been some solid attempts. Experiments that take the genomic DNA from the nucleus of a dead animal and clone it within the egg of another living member of that species have effectively cloned an animal back from the dead. This method, however, is currently limited to animals that have died recently, as DNA degrades beyond use over time.
One might expect that as genomic technologies improve, particularly the genome editing made possible by CRISPR/Cas9, we might one day be able to truly resurrect an extinct species. But this leads to strongly debated questions about the ethics and morality of de-extinction. If we can bring a species back from the dead, should we? What are the unexpected impacts of its revival? How will we prevent history from repeating itself, and the species simply going extinct again? In a rapidly changing world, how can we account for the differences in environment between when the species was alive and now?
There is no clear, simple answer to many of these questions. We are only scratching the surface of the possibility of de-extinction, and I expect that this debate will only accelerate with the research. One thing remains eternally true, though: it is still the distinct responsibility of humanity to prevent more extinctions in the future. Handling the growing climate change problem and the collapse of ecosystems remains a top priority for conservation science, and without a solution there will be no stable planet on which to de-extinct species.
Further to this, we can expand the site-frequency spectrum to compare across populations. Instead of having a simple 1-dimensional frequency distribution, for a pair of populations we can have a grid. This grid specifies how often a particular allele occurs at a certain frequency in Population A and at a different frequency in Population B. This can also be visualised quite easily, albeit as a heatmap instead. We refer to this as the 2-dimensional SFS (2DSFS).
The same concept can be expanded to even more populations, although this gets harder to represent visually. Essentially, we end up with a set of different matrices which describe the frequency of certain alleles across all of our populations, merging them together into the joint SFS. For example, a joint SFS of 4 populations would consist of 6 (4 x 4 total comparisons – 4 self-comparisons, then halved to remove duplicate comparisons) 2D SFSs all combined together. To make sense of this, check out the diagrammatic tables below.
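To make this a little more concrete, here is a minimal Python sketch (using invented allele counts, not real data) that builds a 2D SFS for a pair of populations and counts how many pairwise 2D SFSs make up the joint SFS for four populations. The sample sizes and SNP counts below are purely hypothetical.

```python
from itertools import combinations
import numpy as np

# Hypothetical example: two populations, with 10 and 8 chromosomes sampled
n_chrom_a, n_chrom_b = 10, 8

# For each SNP: (derived allele count in population A, count in population B)
snp_counts = [(1, 0), (3, 2), (0, 5), (1, 1), (9, 8), (2, 0), (1, 1)]

# The 2D SFS is a grid: rows are frequency classes in pop A, columns in pop B
sfs_2d = np.zeros((n_chrom_a + 1, n_chrom_b + 1), dtype=int)
for count_a, count_b in snp_counts:
    sfs_2d[count_a, count_b] += 1
print(sfs_2d)  # this grid is what gets drawn as a heatmap

# The joint SFS for k populations is built from one 2D SFS per population pair
k = 4
pairs = list(combinations(range(k), 2))
print(len(pairs))  # 6 pairs, matching (4 x 4 comparisons - 4 self-comparisons) / 2
```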
The different forms of the SFS
Which alleles we choose to use within our SFS is particularly important. If we don’t have a lot of information about the genomics or evolutionary history of our study species, we might choose to use the minor allele frequency (MAF). Given that SNPs tend to be biallelic, for any given locus we could have Allele A or Allele B. The MAF takes the less frequent of these two within the dataset and uses that in the summary SFS: since the major allele’s count is simply the total number of sampled alleles (2N for N diploid individuals) minus the minor allele’s count, it adds no extra information and isn’t included in the summary. An SFS made of the MAF is also referred to as the folded SFS.
Alternatively, if we know some things about the genetic history of our study species, we might be able to divide Allele A and Allele B into derived or ancestral alleles. Since SNPs often occur as mutations at a single site in the DNA, one allele at the given site is the new mutation (the derived allele) whilst the other is the ‘original’ (the ancestral allele). Typically, we would use the derived allele frequency to construct the SFS, since under coalescent theory we’re trying to simulate that mutation event. An SFS made of the derived alleles only is also referred to as the unfolded SFS.
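As a rough illustration of the difference, here is a minimal sketch with invented derived-allele counts (assuming SNPs have already been called and, for the unfolded case, that the ancestral state is known):

```python
import numpy as np

n_chrom = 10  # 2N sampled chromosomes (5 diploid individuals)

# Hypothetical derived allele counts at each SNP (requires a known ancestral state)
derived_counts = np.array([1, 1, 2, 9, 3, 1, 8, 2, 5, 1])

# Unfolded SFS: number of SNPs with a derived allele count of 1, 2, ..., 2N-1
unfolded = np.bincount(derived_counts, minlength=n_chrom + 1)[1:n_chrom]

# Folded SFS: if the ancestral state is unknown, use the minor allele count,
# i.e. whichever of the two alleles is rarer (count <= N)
minor_counts = np.minimum(derived_counts, n_chrom - derived_counts)
folded = np.bincount(minor_counts, minlength=n_chrom // 2 + 1)[1:]

print(unfolded)  # [4 2 1 0 1 0 0 1 1] for derived counts 1..9
print(folded)    # [5 3 1 0 1] for minor allele counts 1..5
```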
Applications of the SFS
How can we use the SFS? Well, it can more or less be used as a summary of genetic variation for many types of coalescent-based analyses. This means we can make inferences about demographic history (see here for a more detailed explanation) without simulating large and complex genetic sequences, using the SFS instead. For example, comparing our observed SFS to the SFS expected under a simulated bottleneck scenario allows us to estimate the likelihood of that scenario.
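As a very hand-wavy sketch of that idea (with invented numbers; in practice the expected SFS proportions would come from coalescent simulations in tools such as fastsimcoal2 or dadi), the comparison boils down to a simple log-likelihood calculation:

```python
import numpy as np

# Observed folded SFS: number of SNPs with minor allele count 1, 2, 3, 4, 5
observed_sfs = np.array([120, 55, 30, 22, 18])

# Hypothetical expected SFS proportions under two demographic scenarios
# (in practice these come from coalescent simulations, not thin air)
expected_constant   = np.array([0.46, 0.23, 0.15, 0.11, 0.05])
expected_bottleneck = np.array([0.30, 0.25, 0.20, 0.15, 0.10])

def sfs_log_likelihood(observed, expected_props):
    """Multinomial log-likelihood of the observed SFS given expected proportions
    (ignoring the constant multinomial coefficient, which cancels in comparisons)."""
    return float(np.sum(observed * np.log(expected_props)))

for name, props in [("constant size", expected_constant),
                    ("bottleneck", expected_bottleneck)]:
    print(name, sfs_log_likelihood(observed_sfs, props))
# The scenario with the higher log-likelihood is the better fit to the observed SFS.
```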
The SFS can even be used to detect alleles under natural selection. For strongly selected parts of the genome, alleles should occur at either high (if positively selected) or low (if negatively selected) frequency, with a deficit of more intermediate frequencies.
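A minimal sketch of that logic (again with made-up counts): under the standard neutral model the unfolded SFS is expected to be roughly proportional to 1/i, so we can compare each frequency class against that expectation and look for an excess of rare alleles alongside a deficit of intermediate ones.

```python
import numpy as np

n_chrom = 10  # 2N sampled chromosomes

# Hypothetical unfolded SFS: SNP counts for derived allele counts 1 .. 2N-1
observed = np.array([40, 18, 11, 8, 6, 5, 4, 4, 4], dtype=float)

# Under the standard neutral model, the expected unfolded SFS is proportional to 1/i
i = np.arange(1, n_chrom)
neutral = (1.0 / i) / np.sum(1.0 / i) * observed.sum()

# Simple diagnostic: the ratio of observed to neutral-expected counts per class.
# Ratios well below 1 at intermediate frequencies, with an excess of rare alleles,
# are one signature consistent with directional selection or a selective sweep.
ratio = observed / neutral
for freq_class, r in zip(i, ratio):
    print(f"derived count {freq_class}: observed/expected = {r:.2f}")
```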
Adding to the analytical toolbox
The SFS is just one of many tools we can use to investigate the demographic history of populations and species. Using a combination of genomic technologies, coalescent theory and increasingly robust analytical methods, the SFS appears poised to tackle ever more nuanced and complex questions about the evolutionary history of life on Earth.
For anyone who absorbs their news and media through the Internet (hello, welcome to the 21st Century), you would undoubtedly be familiar with a few frustrating and disingenuous aspects of media such as clickbait headlines and targeted advertising. Another one that might aggravate the common reader is Ol’ Reliable, the paywall – blocking access to an article unless some volume of money is transferred to the publisher, usually through a subscription basis. You might argue that this is a necessary evil, or that rewarding well-written pieces and informative journalism through monetary means might lead to the free market starving poor media (extremely optimistically). Or you might argue that the paywall is morally corrupt and greedy, and just another way to extort money out of hapless readers.
Accessibility in science
I’m loath to tell you that even science, the powerhouse of objectivity with peer review to increase accountability, is stifled by the weight of corporate greed. You may notice this from some big-name journals, like Nature and Science – articles cost money to access, either at the individual level (e.g. per individual article, or as a subscription for a single person for a year) or for an entire institution (such as a university). To state that these paywalls are exorbitantly priced would be a tremendous understatement – for reference, an institutional subscription to the single journal Nature (one of 2,512 journals listed under the conglomerate of Springer Nature) costs nearly $8,000 per year. A download of a single paper often costs around $30 for a curious reader.
Some myths about the publishing process
You might be under the impression, as above, that this money goes towards developing good science and providing a support network for sharing and distributing scientific research. I wish you were right. In his book ‘The Effective Scientist’, Professor Corey Bradshaw describes the academic publishing process as “knowledge slavery”, and no matter how long I spent thinking about this blog post I could never come up with a more macabre yet apt description. And while I highly recommend his book for a number of reasons, his summary and interpretation of how publishing in science actually works (both the strengths and pitfalls) is highly informative and representative.
There are a number of aspects of publishing in science that make it so toxic to researchers. For example, the entirety of the funds acquired through the publishing process goes to the publishing institution – none of it goes to the scientists who performed and wrote the work, none to the scientists who reviewed and critiqued the paper prior to publication, and none to the institutions that provided the resources to develop the science. In fact, the perception is that if you publish science in a journal, especially a high-ranking one, it should be an honour just to have your paper in that journal. You got into Nature – what more do you want?
Open Access journals
Thankfully, some journals publish science without the paywall: we refer to these as ‘Open Access’ (OA) journals. Although the increased accessibility is undoubtedly a benefit for the spread of scientific knowledge, the reduced revenue often means that a successful submission comes with an associated cost. This cost is usually presented as an ‘article processing charge’, which for a semi-decent journal can run upwards of thousands of dollars for a single paper. Submitting to an OA journal can be a bit of a delicate balance: the increased exposure, transparency and freedom to disseminate research is a definite positive for scientists, but the exorbitant costs that can be associated with OA journals can preclude less productive or less financially robust labs from publishing in them (regardless of the quality of the science produced).
Manuscripts and ArXives
There is something of a counter-culture to the tyranny of scientific journals: some sites exist where scientists can freely upload their manuscripts and articles without a paywall or submission cost. Naturally, the publishing industry reviles this, and many of these sites are not strictly legal (since you effectively hand over almost all publishing rights to the journal at submission). The most notable of these is Sci-Hub, which uses various techniques (including cycling through different domain names in different countries) to bypass paywalls.
Other, more user-generated options exist, such as the different subcategories of ArXiv, where users can upload their own manuscripts free of charge, without a paywall and predominantly prior to the peer-review process. By being publicly uploaded, ArXiv sites allow scientists to broaden the peer-review process beyond a few journal-selected reviewers. There is still some screening when submitting to ArXiv to filter out non-scientific articles, but the overall process is much more transparent and scientist-friendly than that of a typical publishing corporation. For articles that have already been published, other sites such as ResearchGate often act as conduits for sharing research (either work obscured by paywalls, despite copyright issues, or work freely accessible through open access).
You might also have heard through the grapevine that “scientists are allowed to send you PDFs of their research if you email them.” This is a bit of a dubious copyright loophole: often, it is not strictly within the acceptable domain of publishing rights, as the journal that published the research maintains all copyright over the work (clever). Out of protest, many scientists will send their research to interested parties anyway, often with the caveat that it not be shared anywhere else, or by sending the manuscript version (as opposed to the finalised published article). Regardless, scientists are more than eager to share their research however they can.
Civil rights and access to science
There are a number of both empirical and philosophical reasons why free access to science is critically important for all people. At least one of these (among many others) is based on your civil rights. Scientific research is incredibly expensive and is often funded through a number of grants from various sources, among the most significant of which are government-funded programs such as the Australian Research Council (ARC).
Where does this money come from? Well, indirectly, you (if you pay your taxes, anyway). This connection can at times be frustrating for scientists – particularly when it is difficult to communicate the importance of research that lacks an obvious or readily transparent commercial, technological or medical impact – but the logic applies to access to scientific data and results, too. As someone who has contributed monetarily to the formation and presentation of scientific work, it is your capitalist right to have access to the results of that work. Although privatisation ultimately overpowers this in the publishing world, there is (in my opinion) a strong moral philosophy behind demanding access to the results of the research you have helped to fund.
Walled off from research
Anyone who has attempted to publish in the scientific literature is undoubtedly keenly aware of the overt corruption and inadequacy of the system. Private businesses hold a monopoly on the dissemination of scientific research, and although science tries to work around it, the structure remains pervasive. However, some changes are in progress that seek to re-invent the way we handle the publishing of scientific research, and with strong support from the general public there is an opportunity to minimise the damage that private publishing businesses perpetuate.
Australia is renowned for its unique diversity of species, and likewise for the diversity of ecosystems across the island continent. Although many would typically associate Australia with the golden sandy beaches, palm trees and warm weather of the tropical east coast, other ecosystems also hold both beautiful and interesting characteristics. Even the regions that might typically seem the dullest – the temperate zones in the southern portion of the continent – themselves hold unique stories of the bizarre and wonderful environmental history of Australia.
The two temperate zones
Within Australia, the temperate zone is actually separated into two very distinct and separate regions. In the far south-western corner of the continent is the southwest Western Australia temperate zone, which spans a significant portion of that corner of the continent. In the south-eastern corner, the unnamed temperate zone spans from the region surrounding Adelaide at its westernmost point, expanding to the east and encompassing Tasmania and Victoria before shifting northward into NSW. This temperate zone gradually develops into the sub-tropical and tropical climates of more northern latitudes in Queensland and across to Darwin.
The divide separating these two regions might be familiar to some readers – the Nullarbor Plain. Not just a particularly good location for fossils and mineral ores, the Nullarbor Plain is an almost perfectly flat arid expanse that stretches from the western edge of South Australia to the temperate zone of the southwest. As the name suggests, the plain is totally devoid of any significant tree cover, owing to the lack of available water on the surface. The plain is a relatively ancient geological structure, and finished forming somewhere between 14 and 16 million years ago when tectonic uplift pushed a large limestone block up to the surface of the crust, forming an effective drain for standing water as the continent aridified. Thus, despite being relatively similar bioclimatically, the two temperate zones of Australia have been disconnected for millions of years and boast very different histories and biota.
The hotspot of the southwest
The southwest temperate zone – commonly referred to as southwest Western Australia (SWWA) – is an island-like bioregion. Isolated from the rest of temperate Australia, it is remarkably geologically simple, with little topographic variation (only the Darling Scarp separates the lower coast from the higher elevation of the Darling Plateau), generally minor river systems and low levels of soil nutrients. One key factor determining complexity in the SWWA environment is the isolation of high-rainfall habitats within the broader temperate region – think of islands within an island.
Contrastingly, the temperate region in the south-east of the continent is much more complex. For one, the topography of the zone is much more variable: there are a number of prominent mountain chains (such as the extended Great Dividing Range), lowland basins (such as the expansive Murray-Darling Basin) and variable valley and river systems. Similarly, the climate varies significantly within this temperate region, with the more northern parts featuring more subtropical climatic conditions with wetter and hotter summers than the southern end. There is also a general trend of increasing rainfall and lower temperatures along the highlands of the southeast portion of the region, and dry, semi-arid conditions in the western lowland region.
A complicated history
The south-east temperate zone is not only variable now, but has undergone some drastic environmental changes throughout its history. Massive shifts in geology, climate and sea levels have profoundly altered the nature of the area. Even volcanic activity has featured at times in the past.
One key hydrological shift that massively altered the region was the rise and fall of the paleo-megalake Bungunnia. Not just a list of adjectives, Bungunnia was exactly as it’s described: a historically massive lake that spread across a huge area prior to its demise ~1-2 million years ago. At its largest, Lake Bungunnia reached an area of over 50,000 km2, spreading from its westernmost point near the current Murray mouth all the way to halfway across Victoria. The lake initially formed due to a tectonic uplift event along the coastal edge of the Murray-Darling Basin ~3.2 million years ago, which dammed the ancestral Murray River (which historically emptied into the ocean much further east than it does today). Over the next few million years, the size of the lake fluctuated significantly with climatic conditions, with wetter periods causing the lake to overfill and burst its banks. With every burst, the lake shrank in size, until a final break ~700,000 years ago when the ‘dam’ broke and the full lake drained.
Another change in the historic environment that readers may be more familiar with is the land bridge that used to connect Tasmania to the mainland. Dubbed the Bassian Isthmus, this land bridge appeared at various points in history when sea levels were lower (i.e. during glacial periods of the Pleistocene cycles), predominantly connecting via the still-above-water Flinders and Cape Barren Islands. However, at lower sea levels, the land bridge spread as far west as King Island: central to this block of land was a large lake dubbed the Bass Lake (creative). The Bassian Isthmus played a critical role in the migration of many of the native fauna of Tasmania (likely including the Indigenous peoples of the now-island), and its submergence and the resulting isolation led to some distinctive differences between Tasmanian and mainland biota. Today, the historic presence of the Bassian Isthmus has left a distinctive mark on the genetic make-up of many species native to the southeast of Australia, including dolphins, frogs, freshwater fishes and invertebrates.
Don’t underestimate the temperates
Although tropical regions get most of the hype as hotspots of biodiversity, the temperate zones of Australia similarly boast high diversity and unique species, and document a complex environmental history. Studying how the biota and environment of the temperate regions have changed over millennia is critical to predicting the future effects of climatic change across large ecosystems.
This is based on the idea that for genes that are not related to traits under selection (either positively or negatively), new mutations should be acquired and lost under predominantly random patterns. Although this accumulation of mutations is influenced to some degree by alternate factors such as population size, the overall average of a genome should give a picture that largely discounts natural selection. But is this true? Is the genome truly neutral if averaged?
First, let’s take a look at what we mean by neutral or not. For genes that are not under selection, allele frequencies should change only through random processes such as drift, and all non-adaptive genes across the genome should show relatively similar distributions of frequencies. While natural selection is one obvious way allele frequencies can be altered (either favourably or detrimentally), other factors can play a role.
The extent of this linkage effect depends on a number of other factors, such as ploidy (the number of copies of a chromosome a species has), the size of the population and the strength of selection around the central locus. The presence of linkage disequilibrium (LD) and its impact on the distribution of genetic diversity has been well documented within the evolutionary and ecological genetics literature. The more pressing question is one of extent: how much of the genome has been impacted by linkage? Is any of the genome unaffected by the process?
Although I avoid having a strong stance here (if you’re an evolutionary geneticist yourself, I will allow you to draw your own conclusions), it is my belief that the model of neutral theory – and the methods that rely upon it – are still fundamental to our understanding of evolution. Although it may be a more conservative way to identify adaptation within the genome, and cannot account for the effects of the above processes, neutral theory remains a direct and well-implemented strategy for understanding adaptation and demography.