Crossing the Wires: why ‘genetic hardwiring’ is not the whole story

The age-old folly of ‘nature vs. nurture’

It should come as no surprise to any reader of The G-CAT that I’m firmly opposed to the false dichotomy (and yes, I really do love that phrase) of “nature versus nurture.” Primarily, this is because the phrase gives the impression of some kind of counteracting balance between intrinsic (i.e. usually genetic) and extrinsic (i.e. usually environmental) factors and the roles they play in behaviour, ecology and evolution. While both are undoubtedly critical for adaptation by natural selection, posing this as a black-and-white split ignores the possibility that the two interact.

We know readily that fitness, the measure by which adaptation or maladaptation can be quantified, is the product of both the adaptive value of a certain trait and the environmental conditions said trait occurs in. A trait that might confer strong fitness in one environment may be very, very unfit in another. A classic example is fur colour in mammals: in a snowy environment, a white coat provides camouflage for predators and prey alike; in a rainforest environment, it’s like wearing one of those fluoro-coloured safety vests construction workers wear.
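
To make that interaction concrete, here is a minimal Python sketch of the fur-colour example. All of the fitness values are invented purely for illustration: the point is simply that the same coat colour that is favoured in one environment is selected against in another.

```python
# A toy illustration of genotype-by-environment interaction in fitness.
# All fitness values are made up to mirror the fur-colour example.
fitness = {
    ("white", "snowfield"): 1.00,   # white coat is camouflaged in snow
    ("brown", "snowfield"): 0.70,   # brown coat stands out against snow
    ("white", "rainforest"): 0.60,  # white coat is highly conspicuous
    ("brown", "rainforest"): 1.00,  # brown coat blends into dark foliage
}

def relative_fitness(coat: str, environment: str) -> float:
    """Look up the relative fitness of a coat colour in a given environment."""
    return fitness[(coat, environment)]

for env in ("snowfield", "rainforest"):
    best = max(("white", "brown"), key=lambda coat: relative_fitness(coat, env))
    print(f"In a {env}, the fitter coat colour is {best}")
```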

Genetics and environment interactions figure.jpg
The real Circle of Life. Not only do genes and the environment interact with one another, but genes may interact with other genes and environments may be complex and multi-faceted.

Genetically-encoded traits

In the “nature versus nurture” context, the ‘nature’ traits are often inherently assumed to be genetic. This is because genetic traits are intrinsic as a fundamental aspect of life, heritable (and thus can be passed on and undergo evolution by natural selection) and define the important physiological traits that provide (or prevent) adaptation. Of course, not all of the genome encodes phenotypic traits, and even less of it relates to diagnosable traits that are relevant for natural selection to act upon. In addition, there is a bit of an assumption that many physiological or behavioural traits are ‘hardwired’: that is, regardless of any influence of the environment, genes will always produce a certain phenotype.

Adaptation from genetic variation.jpg
A very simplified example of adaptation from genetic variation. In this example, we have two different alleles of a single gene (orange and blue). Natural selection favours the blue allele, so over time it increases in frequency. The difference between these two alleles is at least one base pair of DNA sequence; such differences typically arise through mutation.
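
If you’d like to see the figure’s scenario in action, the little sketch below simulates it: a blue allele with a modest fitness advantage gradually replaces the orange allele. The selection coefficient and starting frequency are arbitrary illustrative choices, not estimates from any real population.

```python
# A minimal one-locus, haploid model of selection favouring the blue allele.
# s is the selection coefficient: blue fitness = 1 + s, orange fitness = 1.
def next_frequency(p_blue: float, s: float) -> float:
    """Advance the blue allele frequency by one generation of selection."""
    mean_fitness = p_blue * (1 + s) + (1 - p_blue)
    return p_blue * (1 + s) / mean_fitness

p = 0.05   # blue allele starts rare (arbitrary choice)
s = 0.10   # 10% fitness advantage (arbitrary choice)
for generation in range(101):
    if generation % 20 == 0:
        print(f"generation {generation:3d}: blue allele frequency = {p:.3f}")
    p = next_frequency(p, s)
```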

Despite how important the underlying genes are for the formation of proteins and the definition of physiology, they are not omnipotent in that regard. In fact, many other factors can influence how genetic traits relate to phenotypic traits: we’ve briefly discussed a number of these previously. One example is interactions across different genes: physiological traits can be encoded by the cumulative presence and nature of many loci (as in quantitative trait loci and polygenic adaptation). Alternatively, one gene may translate to multiple different physiological characters if it shows pleiotropy.

Differential expression

One non-direct way genetic information can impact the phenotype of an organism is through something we’ve briefly discussed before known as differential expression. This is based on the notion that different environmental pressures may affect the expression of a gene (that is, how strongly it is transcribed and translated into protein) in alternative ways. This is a fundamental underpinning of what we call phenotypic plasticity: the concept that despite having the exact same (or very similar) genes and alleles, two clonal individuals can differ in various traits. This is related to the example of genetically-identical twins who are not necessarily physically identical; this could be due to environmental influences on growth, behaviour or personality.

Brauer DE figure_cropped
An example of differential expression in wild populations of southern pygmy perch, courtesy of Brauer et al. (2017). In this figure, each column represents a single individual fish, with the phylogenetic tree and coloured boxes at the top indicating the different populations. Each row represents a different gene (this is a subset of 50 from a much larger dataset). The colour of each cell indicates whether that gene is expressed more (red) or less (blue) than average in that individual. As you can see, the different populations can clearly be distinguished by their expression profiles, with certain genes expressed more or less in certain populations.
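
As a rough sketch of the kind of calculation behind a heatmap like this, the snippet below standardises each gene’s expression values across individuals, so above-average expression comes out positive (“red”) and below-average comes out negative (“blue”). The gene names and expression values are invented purely for illustration and have nothing to do with the actual Brauer et al. dataset.

```python
import statistics

# Hypothetical expression levels: gene -> one value per individual fish,
# with individuals 1-3 from population A and 4-6 from population B.
expression = {
    "gene1": [12.0, 11.5, 12.8, 4.0, 3.5, 4.2],
    "gene2": [2.1, 2.4, 1.9, 9.0, 8.7, 9.3],
}

for gene, values in expression.items():
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    z_scores = [(v - mean) / sd for v in values]   # per-gene standardisation
    pretty = ", ".join(f"{z:+.2f}" for z in z_scores)
    print(f"{gene}: {pretty}")
```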

From an evolutionary perspective, the ability to translate a single gene into multiple phenotypic traits has a strong advantage. It allows adaptation to novel environments without waiting for natural selection to favour adaptive mutations (or for new, adaptive alleles to become available from new mutation events). This might be a fundamental trait that determines which species can become invasive pests, for instance: the ability to establish and thrive in environments very different to their native habitat allows introduced species to quickly proliferate and spread. Even for species which we might not consider ‘invasive’ (i.e. those that have naturally spread to new environments), phenotypic plasticity might allow them to very rapidly adapt and evolve into new ecological niches, and could even underpin the early stages of the speciation process.

Epigenetics

Related to this alternative expression of genes is another relatively recent concept: that of epigenetics. In epigenetics, the expression and function of genes is controlled by chemical additions to the DNA which can make gene expression easier or more difficult, effectively promoting or silencing genes. Generally, the specific chemicals that are attached to the DNA are relatively (but not always) predictable in their effects: for example, the addition of a methyl group to the sequence is generally associated with the repression of the gene underlying it. How and where these epigenetic markers are placed may in turn be affected by environmental conditions, creating a direct conduit between environmental (‘nurture’) and intrinsic genetic (‘nature’) aspects of evolution.

Epigenetic_mechanisms.jpg
A diagram of different epigenetic factors and the mechanisms by which they control gene expression. Source: Wikipedia.

Typically, these epigenetic ‘marks’ (chemical additions to the DNA) are erased and reset during fertilisation: the epigenetic marks on the parental gametes are removed, and new marks are made on the fertilised embryo. However, it has been shown that this removal process is not 100% effective, and in fact some marks are clearly passed down from parent to offspring. This means that these marks are heritable, and they could therefore evolve in much the same way as true DNA mutations.
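
As a back-of-the-envelope illustration of that imperfect resetting, the toy simulation below erases most (but not all) marks at each generation. The 10% ‘escape’ probability is an arbitrary assumption chosen only to show what partial heritability might look like, not a measured value.

```python
import random

random.seed(1)                 # fixed seed so the toy example is repeatable
ESCAPE_PROBABILITY = 0.1       # chance a parental mark survives resetting (assumed)

def inherit_marks(parental_marks):
    """Return the subset of parental marks that escape erasure at fertilisation."""
    return {m for m in parental_marks if random.random() < ESCAPE_PROBABILITY}

marks = {f"mark_{i}" for i in range(100)}   # 100 marks in the original parent
for generation in range(1, 4):
    marks = inherit_marks(marks)
    print(f"generation {generation}: {len(marks)} of the original marks retained")
```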

The discovery of epigenetic markers and their influence on gene expression has opened up the possibility of understanding heritable traits which don’t appear to be clearly determined by genetics alone. For example, research into epigenetics suggests that heritable major depressive disorder (MDD) may be controlled by the expression of genes, rather than by specific alleles or genetic variants themselves. This is likely true for a number of traits for which the association with genotype is not entirely clear.

Epigenetic adaptation?

From an evolutionary standpoint again, epigenetics can similarly influence the ‘bang for buck’ of particular genes. Being able to translate a single gene into many different forms, and for this to be linked to environmental conditions, allows organisms to adapt to a variety of new circumstances without the need for specific adaptive genes to be available. Following this logic, epigenetic variation might be critically important for species with naturally (or unnaturally) low genetic diversity to adapt and survive in an ever-changing world. Thus, epigenetics might paint a more optimistic outlook for the future: although genetic variation is, without a doubt, one of the most fundamental aspects of adaptability, even horrendously genetically depleted populations and species might still be saved with the right epigenetic diversity.

Epigenetic cats example
A relatively simplified example of adaptation from epigenetic variation. In this example, we have a species of cat; the ‘default’ cat has non-tufted ears and an orange coat. These two traits are controlled by the expression of Genes A and B, respectively: in the top cat, neither gene is expressed. However, when this cat is placed into different environments, the different genes are “switched on” by epigenetic factors (the green markers). In a rainforest environment, the dark foliage makes a darker coat colour more adaptive; switching on Gene B allows this to happen. Conversely, in a desert environment switching on Gene A causes the cat to develop tufts on its ears, which makes it more effective at hunting prey hiding in the sands. Note that in both circumstances, the underlying genetic sequence (indicated by the colours in the DNA) is identical: only the expression of those genes changes.
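
To make the cat example even more literal, here is a tiny sketch of the same idea in code: the ‘genome’ never changes, but an environment-dependent epigenetic switch decides which genes are expressed and therefore which phenotype develops. The gene names, environments and trait mappings all follow the hypothetical figure rather than any real biology.

```python
# Hypothetical genome: the DNA sequence is identical in every environment.
GENOME = {"geneA": "tufted ears", "geneB": "dark coat"}

# Which genes each environment switches on epigenetically (the green markers).
EPIGENETIC_SWITCHES = {
    "default":    set(),
    "rainforest": {"geneB"},   # dark foliage favours a darker coat
    "desert":     {"geneA"},   # ear tufts help hunting prey in the sand
}

def phenotype(environment: str) -> list:
    """Traits produced by expressing only the genes switched on in this environment."""
    active = EPIGENETIC_SWITCHES[environment]
    return [GENOME[gene] for gene in sorted(active)]

for env in ("default", "rainforest", "desert"):
    traits = phenotype(env) or ["non-tufted ears, orange coat"]
    print(f"{env}: {', '.join(traits)}")
```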

 

Epigenetic research, especially from an ecological/evolutionary perspective, is a very new field. Our understanding of how epigenetic factors translate into adaptability, how epigenetic diversity compares with genetic diversity in driving adaptation, and what role the limited heritability of epigenetic marks plays is still in its infancy. As with many avenues of research, further studies across different contexts, experiments and scopes will continue to reveal this exciting new aspect of evolutionary and conservation genetics. In short: watch this space! And remember, ‘nature is nurture’ (and vice versa)!

When “getting it wrong” is ‘right’

The nature of science

Over the course of the (relatively brief) history of this blog, I’ve covered a number of varied topics. Many of these have been challenging to write about – either because they are technically-inclined and thus require significant effort to distill into something sensible and free of jargon, or because they address personal issues related to mental health or artistic expression. But despite the nature of those posts, this week’s blog has proven to be one of the most difficult to write, largely because it demands a level of personal vulnerability, acceptance of personality flaws and a potentially self-deprecating message. Alas, I find the topic too important to ignore.

It should come as no surprise to any reader, whether scientifically trained or not, that the expectation of scientific research is one of total objectivity, clarity and accuracy. Scientific research that is seen not to meet these standards is invariably labelled ‘bad science’. Naturally, we aim to maximise the value of our research by addressing these as best we conceivably can. Therein, however, lies the limitation: we can never be totally objective, clear or accurate in research, and acceptance and discussion of the limitations of research is a vital aspect of any paper.

The imperfections of science

The basic underpinning of this disjunction lies with the people who conduct the science. While the scientific method has been developed and refined over centuries to be as objective, factual and robust as possible, the underlying researchers will always be plagued to some degree by subjectivity. Whether we consciously mean to or not, our prior beliefs, perceptions and history influence the way we conduct or perceive science (hopefully only to a minor extent).

Inherent biases figure
How the different aspects of ourselves can influence our research. The scientific method directly addresses the more objective aspects (highlighted by green arrows), but other, more subjective concepts may cause bias. Ideally, the objective parts outweigh the subjective ones (indicated by the size of the arrows), a balance that the peer-review process helps to maintain.

 

Additionally, one of the drawbacks of being mortal is that we are prone to making mistakes. Biology is never perfect, and the particularly complex tasks and ideas we assign ourselves to research inevitably involve some level of incorrectness. But while that may seem to fundamentally contradict the nature of science, I argue that it is in fact not just a reality of scientific research, but also a necessity for progression.

Impostor syndrome

One widely recognised manifestation of this disjunction between idealistic science and practical science, and one particularly felt by researchers in training such as post-graduate students, is referred to as ‘impostor syndrome’. This involves the sometimes insidious (and sometimes more overt) feeling of inadequacy when we compare ourselves to a wider crowd. It is the feeling of not belonging in a particular social or professional group due to a lack of experience, talent or other ‘right’ characteristics. This is particularly pervasive in postgraduate students, as we inevitably interact with and compare ourselves to those we aspire to be like – postdoctoral researchers, professors, or other more established researchers – who are naturally more experienced in the field. The jarring disjunction between our own capability and theirs, often inaccurately assumed to be a proxy for intelligence, leads many to feel incapable or inadequate as a ‘real’ scientist.

imposter syndrome.jpg
I’d explain impostor syndrome as “feeling like being three kids stacked in a lab coat instead of a ‘real scientist’.”

It cannot be overstated that impostor syndrome is often the result of mental health issues and a high-pressure, demanding academic system, and is rarely a rational perception. In many cases, we see only the best aspects of scientific research (this is true for academic students and the general public alike), a rose-coloured view of the process. What we don’t see, however, is the series of failures and missteps that have led to even the best of scientific outcomes, and so we may assume that they didn’t happen. This is absolutely false.

Analysis paralysis

Another tangible impact of impostor syndrome and self-induced perfectionism is the suppression of progressive work. By this I mean the typical ‘procrastinating’ behaviour that comes about from perfectionism: we often prevent ourselves from moving forward if we perceive that there might be (however minor) issues with our work. Within science, this often involves inordinate amounts of reading and preparation for running an analysis without ever actually running anything. This is what has been called ‘analysis paralysis’, and it disguises inactivity under the pretence that the student is still learning the ropes.

The reality is that trying to predict the multitude of factors and problems one can run into when conducting an analysis is a monumental task. Some aspects relevant to a particular dataset or analysis are unlikely to be discussed or clearly referenced in the literature, and are thus difficult to anticipate. Problem solving is often more effective as a reactive, rather than proactive, measure: it allows researchers to respond to an issue when it arises instead of getting bogged down in the astronomical realm of “things that could possibly go wrong.”

Drawing on personal experience, this has led to literal months of reading and preparing data for running models, only for the first dozen attempts to fail or run incorrectly due to something as trivial as formatting. The lesson learnt is that I should have just tried to run the analysis early, stuffed it all up, and learnt from the mistakes with a little problem solving. No matter how much reading I did, or ever could do, some of these mistakes could never have been explicitly predicted a priori.

analysis error messages collage.jpg
Sometimes it feels like analysis is 90% “why didn’t this work?!” I think that’s realistic, though.

Why failure is conducive to better research

While we should always strive to be as accurate and objective as possible, sometimes this can be counterproductive to our own learning progress. The rabbit holes of “things that could possibly go wrong” run very, very deep, and if you fall down them you’ll surely end up in a bizarre world full of odd distractions, leaps of logic and insanity. Under these circumstances, I suggest allowing yourself to get it wrong: although repeated failures are undoubtedly damaging to the ego and confidence, giving ourselves the opportunity to make mistakes and grow from them ultimately allows us to become more productive and educated than if we avoided them altogether.

Alice in Wonderland analogy
“We’re all mad here.”

Speaking at least from personal anecdote (although my story appears to be corroborated by other students’ experiences), some level of failure is critical to the learning process and important for scientific development generally. Although cliché, “learning from our mistakes” is inevitably one of the quickest and most effective ways to learn, and allowing ourselves to be imperfect, a little inaccurate or at times foolish is conducive to better science.

Allow yourself to stuff things up. You’ll do it way less in the future if you do.