PDE4A5 signalling impairs hippocampal synaptic plasticity and long-term memory

Posted comment on 'Compartmentalized PDE4A5 Signaling Impairs Hippocampal Synaptic Plasticity and Long-Term Memory' by R. Havekes, A.J. Park, R.E. Tolentino, V.M. Bruinenberg, J.C. Tudor, Y. Lee, R.T. Hansen, L.A. Guercio, E. Linton, S.R. Neves-Zaph, P. Meerlo, G.S. Baillie, M.D. Houslay and T. Abel, published in Journal of Neuroscience, 24 Aug 2016, 36(34): 8936, doi 10.1523/JNEUROSCI.0248-16.2016


Havekes and colleagues investigated the link between the binding of specific compartmentalized cAMP-specific phosphodiesterase 4 (PDE4) isoforms in mouse excitatory hippocampal neurons and the cognitive changes associated with some neurological disorders. Expression levels of PDE4 isoforms are known to be altered in traumatic brain injury, autism, schizophrenia and bipolar disorder, for example, as well as being affected by ECT and antidepressant treatment. It is also known that PDE4 isoforms exert their influence on cognitive capability by binding via their N-termini to specific protein complexes, thereby affecting the degradation of cAMP in specific intracellular compartments. To investigate the effect of the PDE4 isoforms in hippocampal cells, Havekes and colleagues altered PDE4A5 and PDE4A1 expression in mice and performed various in vivo cognitive tests (e.g. the object-place recognition task, fear-conditioning task, open field task and zero maze task) and several in vitro tests on cultured cells, such as electrophysiology and fluorescence resonance energy transfer (FRET) sensor imaging.

Havekes and colleagues found that virally induced PDE4A5 expression was observed in excitatory neurons of the hippocampus, but not in astrocytes. This expression increased PDE4 activity in the hippocampus and thereby reduced cAMP levels in this area, but not in the prefrontal cortex or cerebellum. The cAMP effect was not global, but specific to certain intracellular compartments. The increased PDE4A5 protein levels did not alter basal synaptic transmission in the Schaffer collateral-CA1 pathway, but did decrease synaptic potentiation.

They also investigated the link between PDE4A5 levels and long-term context-shock associations and found that selective overexpression of PDE4A5 attenuated long-term memory. Increased protein levels did not affect freezing levels during training, but decreased freezing was observed when the mice were re-exposed to the context after the conditioning period. This result was explained by short-term memories not requiring cAMP signaling, whereas long-term memories do. Hence, PDE4A5 was said to impair hippocampal plasticity, resulting in long-term memory deficits. When the experiment was repeated with tone-cued fear conditioning instead (a process that uses the amygdala region rather than the hippocampus, due to the fear element of the electric shock), similar freezing levels were found under all conditions. Therefore, it was concluded that tone-cued fear conditioning is not affected by PDE4A5 levels in the hippocampus.

The authors also looked at the effect of PDE4A5 levels on performance of the object-location memory task in mice. They found that mice expressing either eGFP or PDE4A5 reduced their exploratory behaviour during training as they learnt the locations of objects. After learning, eGFP mice could remember the locations, but mice overexpressing PDE4A5 demonstrated reduced memory and explored all of the objects to the same extent. In the novel object recognition task, both eGFP- and PDE4A5-overexpressing mice showed the same exploration of novel objects, indicating that they could distinguish novel objects from familiar ones. An investigation of cAMP responses using the ICUE3 biosensor in hippocampal neurons expressing either a control vector or full-length PDE4A5 found that baseline FRET responses were not affected by the overexpression. The attenuated forskolin-mediated FRET response could be normalized by application of the PDE inhibitor IBMX, suggesting that the decrease in FRET response was due to the overexpression of PDE4A5 and not a result of nonspecific alterations in PDE/cAMP signaling.

It is known that the N-terminus of the PDE4 isoforms is important for their binding to protein complexes, and Havekes and colleagues investigated whether PDE4A5 also requires the N-terminus for the context-shock results. The PDE4A5 isoform was truncated at the N-terminus (at 303 bp) and, with this construct, no impairment of long-term memory was found. A repetition of the object-place memory test also found no difference between the eGFP and PDE4A5-overexpressing animals. The attenuated forskolin-mediated FRET response observed with the ICUE3 biosensor in hippocampal neurons expressing full-length PDE4A5 (which could be normalized by application of the PDE inhibitor IBMX) was also not observed with the truncated version. These investigations supported the view that the N-terminus is important for the placement of the PDE4 isoform in the cell. Using fluorescence imaging, Havekes and colleagues found different intracellular distributions of the full-length and truncated versions: the full-length form was found in discrete perinuclear areas and in the dendritic compartments, whereas the truncated version was found predominantly in the former.

Havekes and colleagues also investigated another PDE4 isoform, PDE4A1. They found differences between PDE4A5 and PDE4A1, with the PDE4A5 isoform being membrane-associated whereas PDE4A1 was located in the Golgi. The investigators also found that overexpression of PDE4A1 produced no change in memory when tested with the object-location memory test. Hence, it was suggested that PDE4A1 does not target protein complexes critical for the formation of object-location memories and that the 4A5 and 4A1 isoforms affect different cellular compartments.

With the link between PDE4A5 overexpression, reduced cAMP and cognitive deficits established, the authors concluded their article by suggesting that targeting the N-terminus would provide an alternative method of regulating PDE4A5 at the cellular level. This approach would be welcome as an alternative to the broad PDE4 inhibitors, which cause undesirable side effects such as diarrhoea and emesis.


What makes Havekes and colleagues' article interesting is that it indirectly investigates the role of cyclic adenosine monophosphate (cAMP) in hippocampal cells and memory, and perhaps gives an indication of one of the elements required for 'switching off' an active cell once synaptic stimulation is over. The article looks at the binding of cAMP-specific phosphodiesterase 4 (PDE4) isoforms to specific proteins in identified compartments of the post-synaptic regions of excitatory neurons in the mouse hippocampus. The authors found that expression of the PDE4 gene leads to production of the protein, and that its subsequent specific binding to intracellular proteins results in a reduction in the cellular cAMP level. Further investigation by the authors showed that this effect was specific to one isoform of the PDE4 protein (the A5) and that binding required a functional N-terminus. Negative effects on cognition were attributed to this N-terminal binding, such as interaction with beta-arrestins (a molecular element critical for learning and memory) and association with certain SH3 domain-containing proteins such as the Src tyrosyl kinase family, the inhibition of which also leads to memory defects.

From a biochemical point of view, PDE4A5 provides a tool with which cAMP functioning within the synaptic area can be investigated. Cyclic AMP is a multifunctional second messenger and its production by adenylate cyclase within the neuron is linked with the opening of chloride ion channels, protein kinase (PK) activation and gene transcription (e.g. CREB phosphorylation). Therefore, if cAMP levels are reduced, then either the PDE4A5 protein reduces cAMP production by binding directly to the adenylate cyclase enzyme (AC) and eliciting conformational changes that prevent the enzyme from working, or it increases the rate at which cAMP formed by a normally acting AC is degraded. Since the PDE4A5 protein is described as a phosphodiesterase (catalysing the breakdown of cAMP to AMP), the latter seems to be how this protein functions in the normal cell. Therefore, if the level of this common second messenger is reduced on PDE4A5 binding, the protein is likely to play a role in the 'switching off' mechanisms of the neuronal cell after stimulation (e.g. in hyperpolarization). The question is: which natural cAMP-dependent neuronal functions is PDE4A5 likely to affect?
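The synthesis/degradation balance described above can be caricatured as a one-compartment kinetic model: AC synthesizes cAMP at a roughly constant rate and PDE degrades it with first-order kinetics, so raising PDE activity lowers the steady-state cAMP level without touching synthesis. The sketch below is illustrative only; the rate constants are arbitrary assumed values, not measurements from the paper.

```python
# Minimal one-compartment model of cAMP turnover:
#   d[cAMP]/dt = k_syn - k_pde * [cAMP]
# Steady state is k_syn / k_pde, so doubling PDE activity halves cAMP.
# Rate constants below are arbitrary illustrative values, not data.

def camp_steady_state(k_syn, k_pde):
    """Analytic steady-state cAMP level for synthesis rate k_syn
    and first-order degradation rate constant k_pde."""
    return k_syn / k_pde

def simulate_camp(k_syn, k_pde, camp0=0.0, dt=0.01, t_end=50.0):
    """Forward-Euler integration of the turnover equation."""
    camp = camp0
    for _ in range(int(t_end / dt)):
        camp += dt * (k_syn - k_pde * camp)
    return camp

baseline = simulate_camp(k_syn=1.0, k_pde=0.2)  # normal PDE activity
overexpr = simulate_camp(k_syn=1.0, k_pde=0.4)  # doubled PDE activity
print(round(baseline, 2), round(overexpr, 2))   # 5.0 2.5
```

Under this toy model, PDE4A5 overexpression simply shifts the steady state downward; compartmentalization would correspond to applying different k_pde values in different sub-volumes of the cell.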

Havekes and colleagues found in their study that PDE4A5 binding was perinuclear, dendritic and compartmentalized. Therefore, the known role of cAMP in chloride ion channel functioning can be ruled out as a location for the PDE4A5 effect, since chloride ion channels are situated on the cell membrane surface. Under normal activation, cAMP would increase the opening of the chloride channels to aid hyperpolarization, and this is linked with GABA binding. The hippocampal CA3 area contains GABA interneurons, and increased GABA receptor binding in this area is linked with fear memory, which correlates with the observation that increased PDE4A5 expression results in anxiety and emotional memory changes. Hence, an increase in GABA binding leading to increased long-term depression of the relevant interneurons may mean that hyperexcitability of the CA1 area occurs. This would be consistent with the cognitive effects observed. However, since a membrane effect is not attributed to PDE4A5 action, an influence on chloride channel opening via cAMP levels can probably be ruled out, and it can be assumed that the cognitive effects observed with increased PDE4 expression come from other factors.

It is more likely that PDE4A5 exerts its effect on cognition by influencing the performance of the various protein kinases involved in neuronal functioning. Cyclic AMP activates protein kinases by altering the enzyme's quaternary structure; therefore, reduced cAMP would reduce the level of functioning protein kinases within the cell. For example, in the presence of PDE4A5 binding there would possibly be reduced activation of calcium/calmodulin-dependent protein kinase, leading to decreased phosphorylation of the synapsin proteins and the synaptic vesicle protein synaptotagmin. This would result in, for example, less vesicular transport in the synapse, leading to less release and degradation of neurotransmitters and lower receptor trafficking. Therefore, it may be suggested that this could be a pathway by which the 'switching off' of the synapse post-stimulation might occur.

A similar rationale could also be applied to the actin-binding protein girdin, one of the proteins responsible for the neuron's actin-based cytoskeleton. This protein interacts with Src tyrosyl kinase, which acts on the NR2B subunit of the NMDA receptor in the hippocampus. This type of glutamate receptor is linked to normal neuronal functioning after stimulation and to long-term potentiation of the area. Therefore, a reduction in cAMP level induced by PDE4A5 binding could affect the actin cytoskeleton of the pre- and post-synaptic areas, resulting in less vesicular transport and trafficking of proteins and receptors, as well as affecting the very receptors that are linked with long-term potentiation and memory. A more direct influence of cAMP on the NMDA receptor also comes from its effect on post-synaptic protein kinase A (PKA) activation. This enzyme would normally phosphorylate a particular residue of the GluN2 subunit of the NMDA receptor, and this subunit has been found to be critical for correct synaptic targeting of the receptor. Therefore, a reduced level of cellular cAMP would mean less PKA phosphorylation of the subunit and lower NMDA receptor numbers at the cell membrane. It is likely that in this case long-term potentiation would not occur, resulting in weaker or non-existent memory formation. Therefore, PDE4A5 binding would reduce neuronal functioning after stimulation, consistent with binding located in the neuronal dendrites.

Havekes and colleagues also found that PDE4A5 binding was located in the perinuclear region of the cell, and this too could be explained by decreased PKA functioning. In this case, PKA phosphorylates the cAMP response element-binding protein (CREB), which binds to DNA. Activation of this protein results in changes in gene transcription, e.g. of nuclear factors such as Bdnf. There is evidence of CREB involvement in PDE4A5 binding, and hence reduced cAMP levels could ultimately affect the amount of gene transcription occurring at the nuclear level.

Therefore, it appears that PDE4A5 could be involved in the 'switching off' of the active neuronal cell, and it is likely that this effect is brought about by the reduced cAMP level influencing protein kinase activity at both the perinuclear and dendritic locations. Since less is known about the mechanisms involved in 'rebalancing' cells after firing in readiness for the next stimulus, identification of elements such as PDE4A5 helps to elucidate the process. This is important because, in the case of cognitive disorders that involve hyperexcitability of particular areas, it may be possible to manipulate such an element to induce the cell to 'switch off', thus returning the area to its correct firing level and restoring appropriate cognitive function.

Since we're talking about the topic……

……if PDE4A5 function is linked with protein kinase activity, can we assume that use of a PK inhibitor such as staurosporine would have no additional effect on cell functioning and cAMP level if PDE4A5 gene expression were increased?

……can we assume that the administration of etomidate, which affects GABA receptor binding and hyperpolarization through chloride ion channel opening, confirms the non-involvement of cAMP at chloride ion channels in the presence of increased PDE4A5 expression?

……is it possible that investigation of the neuronal activity of schizophrenia sufferers, who are reported to have disrupted N-terminal binding of PDE4A5, would demonstrate unusual protein kinase functioning, and that further investigation of the areas and the particular protein kinases involved would elucidate exactly where PDE4A5 acts?



visual imagery deficiency

Posted comment on 'Blind in the Mind' by D. Grinnell and published in New Scientist, 23rd April 2016, 3070, p34


The author of the article, D. Grinnell, has never been able to form mental imagery, but claims he has no problems with tasks that are usually aided by it, e.g. navigation and recognizing people. He appears not to be unique: according to a study using the Vividness of Visual Imagery Questionnaire, in which various scenes have to be imagined and the clarity of the mental picture rated, 2-3% of people also lack the capability. The idea that some people are not capable of forming mental images is not new, with Sir Francis Galton reporting it as early as 1880. He asked his study participants to imagine things on a breakfast table and found that some were unable to carry out the required task.

In his article, Grinnell quotes a study by Zeman and colleagues whose subject, MX, was a 65-year-old building surveyor who reported losing the capability to form mental images after heart surgery. MRI scans showed that when pictures of recognizable things were shown to MX, firing patterns were produced in visual areas towards the back of the brain, and these patterns were both expected and distinctive. Attempts by MX to imagine the same pictures, however, produced no such firing patterns. Although the mental images could not be formed, it was found that MX could still give relevant information about the objects, such as the number of windows in a particular house. The condition of lacking mental imagery was named aphantasia by Zeman and his colleagues, who also found in their study a further 21 people with the condition, all of whom appeared to have had it from birth. Zeman concluded that a person does not have to see something to 'live it', they just need to be aware of it, and Grinnell, himself a sufferer, agrees with this view in his article.

In his article, Grinnell went on to describe the psychological hypotheses relating to visual imagery. The cognitive neuroscientist Kosslyn described visual imagery as depictive or quasi-pictorial representations, in which the spatial organization of brain activity resembles the object imagined. Kosslyn explained visual imagery from a physiological perspective by saying that it is not constructed in a single way in the brain, because the separate visual circuits for shape, colour and spatial relationships are not all switched off in aphantasia. Grinnell found on questioning aphantasics that visual imagery was replaced by imaginary drawing and therefore by control of physical movements such as finger movements. This hypothesis was supported by Zeman and his team. In their tests on MX, Zeman's group also found that MX's spatial rotation skills were faster than average. Spatial rotation requires the subject to say which images are the same as a guide image, only rotated; hence the greater the rotation, the longer the time normally required to work out whether there is a match, because of the need for mental image manipulation. To explain these observations, Zeman believes that everyone has visual capabilities, and that people with mental imagery rely on this visual information whereas aphantasics draw on other information or representations. This is supported by evidence that aphantasics dream in pictures, and that some see flashes of imagery under certain conditions, e.g. before they fall asleep. Therefore, aphantasics may not be able to consciously control their mental pictures, but the capability to form them may not itself have vanished.

Grinnell continues in his article by citing Zeman's hypothesis of the parallels between aphantasia and blindsight. In blindsight, there is visual information, but no conscious awareness of it. De Vito and Bartolomeo extended this by saying that aphantasics still have the capability to imagine, but just believe they cannot, thus supporting Zeman's hypothesis. It was proposed that extreme stress could induce a change to aphantasia, and evidence was given from a study of Monsieur X in 1883, who developed aphantasia after a period of intense anxiety. However, this could not be said to apply to other well-known cases, including MX, whose aphantasia was caused by brain injury, and Grinnell himself, who was born with the condition.

Grinnell in his article also discussed whether aphantasia is reversible. Pearson in Australia looked at whether mental imagery could be reset. In 2008, a test was developed that objectively measured people's mental imagery capability. Subjects' fields of vision were divided so that they saw a set of horizontal red stripes through one eye and a set of vertical green stripes through the other. Normally, one set is perceived first, but if flash cards of one pattern are displayed quickly several times beforehand, then for most people the probability of perceiving that particular colour first increases. This was explained by the formation of the picture in the subject's mind's eye, which primed the participant to see it again. However, studies on aphantasics gave inconsistent results. Pearson then coached those participants who demonstrated an unconscious mind's eye, telling them to try visualizing either the green or the red striped pattern for a few seconds every day for 5 days. The process was then repeated in the laboratory and the participants were asked to rate the strength of the image. Immediately afterwards, Pearson flashed the red pattern to one eye and the green to the other and measured whether people had a perception bias. In some cases, the objective rating remained constant but the subjective rating improved, suggesting that the training had helped people begin to access the previously subconscious mind's eye. Grinnell himself found shapeless lights flashed into his mind, but decided not to continue with the training.
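Pearson's binocular rivalry measure boils down to a simple statistic: over many trials, how often does the first percept match the pattern the subject was just primed with, relative to the 50% chance level? The sketch below scores that bias; the trial format is an assumed illustration of how such data might be recorded, not the actual protocol or data.

```python
# Each trial records (primed_pattern, first_percept); chance level is 0.5.
# A score well above 0.5 suggests the priming/imagery biased perception.

def priming_score(trials):
    """Fraction of trials where the first percept matched the primed pattern."""
    matches = sum(1 for primed, perceived in trials if primed == perceived)
    return matches / len(trials)

# Hypothetical trial data for illustration only.
trials = [
    ("red", "red"), ("green", "green"), ("red", "red"),
    ("green", "red"), ("red", "red"), ("green", "green"),
]
print(priming_score(trials))  # 5 of 6 trials matched -> ~0.83
```

The inconsistent results in aphantasics would show up here as scores hovering around 0.5 rather than clearly above it.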

Grinnell's article concluded with him saying that aphantasia had given him a unique way of seeing the world which he did not want to relinquish. Others also stressed the importance of aphantasia and the unique skills that people lacking mental imagery require to carry out cognitive processes. This capability could be used to determine alternative ways of information processing and thinking, which could aid those suffering from neurological disorders.


What makes this article interesting is that aphantasia appears to go against what we think is happening with the neurochemical mechanisms underlying certain cognitive capabilities such as complex decision-making or navigation. In such examples, we believe that imagined visual information built in the mind, albeit based on 'real' information whether in real time or from memories, helps the brain to carry out the required tasks. However, it is clear that there are certain people, the aphantasics, who have no visual imagery and whose neurochemical mechanisms are obviously different from others', but who are still able to perform normal cognitive tasks such as decision-making. Therefore, there is a need to investigate the neurochemical mechanisms of this minority of people. For the 97-98% of sighted people with imagery, visual information plays an important role in memories, thinking and information processing, and to carry out these functions various visual systems and mechanisms are employed, including: the physiological visual neuronal pathway from input at the eye to the higher cortical areas; visual short-term memory, where visual information is held as an electrical firing pattern for a very short period of time (less than 10 seconds) and the person may be conscious or unconscious of the experience; visual long-term memory, where neuronal cell assemblies are formed from the short-term visual firing patterns and the information is stored as memories to be consciously or unconsciously recalled at a later date; the visual buffer, which is part of the Baddeley and Hitch working memory model and is the processing 'work space' of the cognitive brain; and finally, and obviously not for everyone, visual imagery, which is defined as a visual memory representation occurring when the stimulus is not actually being viewed, i.e. 'seeing with the mind's eye'.

The various visual systems and mechanisms are well researched and new knowledge is continually being added. From this collection of knowledge we know that visual representations are part of, for example, decision-making, thought (Aristotle's view was that they are the 'medium of thought'), problem-solving, prospective memory planning and memory techniques such as the method of loci. The visual images are formed in V1 with involvement of V2, have slightly elongated fields, and bear close similarities to perception even though they are lower in detail than those spawned from 'real' stimuli. In the majority of people, the visual representations formed in V1 are likely to follow Kosslyn's perceptual anticipation theory, with the images being quasi-pictorial representations.

However, aphantasics show that, in their case, visual imagery is not necessary for the same cognitive capabilities to be demonstrated, and therefore their neurochemical mechanisms are likely to differ from those of the majority. In fact, they are examples in support of Pylyshyn's propositional theory of visual imagery, in which the visual image is not dependent on depictive/quasi-pictorial representations, but on a tacit knowledge of how the subject would 'look' in the situation. This could possibly be explained by considering the information at V1 not as solely visual, but instead as electrical representations capable of being interpreted into a visual image if required, or more likely as a multi-sensory representation with input included from the other sensory systems as well. Such a representation would override the need for visual dominance in cognitive functioning and allow aphantasics to process information in the absence of conscious visual imagery, using instead the unconscious information from visual pathways and other senses. Therefore, aphantasics lack the awareness that visual information is being used, knowing only that a representation is formed. This view is supported by the observations that: the visual imagery capability is still present in aphantasics, since studies have shown that people can to some degree be trained to use it; conscious visual information is not always required, since in others there are plenty of examples of unconscious visual processing, such as moving before knowing why you have to move; and in the cases of blindsight and visual working memory, visual information is processed without conscious awareness.

What also makes this topic interesting is that aphantasics provide a relatively large subject group in experimental terms who are likely not to suffer from mental health issues or brain injuries, and who could allow the conditions and mechanisms of mental tasks, e.g. decision-making or prospective memory, to be explored to the full. Not only could objective research methods be employed, but introspection could be considered more repeatable and reliable. Studies using techniques such as imaging, or temporary incapacitation of certain brain areas with tDCS or local anaesthetics, could explore the mechanisms involved in the cognitive processing of aphantasics and perhaps shed light on new approaches to, for example, the treatment of mental disorders affected by deficient information processing. In his article, Grinnell declines to continue with the training to overcome his lack of visual imagery, preferring his uniqueness, and he may be helping the rest of us by doing so!

Since we're talking about the topic……

……since visual imagery is presumed to be required for matching objects that have been rotated to some degree, would accurate imaging studies show the mechanisms that aphantasics employ in carrying out this skill?

……if training with flash cards changes the performance of aphantasics towards 'seeing images', could training using images of clocks aid prospective memory performance in those suffering from disorders where information binding is problematic?


frequency selective control of cortical networks by thalamus using optogenetics

Posted comment on 'Frequency-selective control of cortical and subcortical networks by central thalamus' by J. Liu, H.J. Lee, A.J. Weitz, Z. Fang, P. Lin, M. Choy, R. Fisher, V. Pinskiy, A. Tolpygo, P. Mitra, N. Schiff and J.H. Lee, published in eLife 2015;4:e09215 (doi.org/10.7554/eLife.09215)


The authors of this paper explored the network connections of the central thalamus, which is known to play a role in arousal and organized behaviour. They combined optogenetics (20 s periods of light stimulation every minute for 6 min, at 10, 40 or 100 Hz) with fMRI to form the ofMRI technique, which provided whole-brain spatial and temporal information. A stereotactic injection of an adeno-associated virus carrying channelrhodopsin-2 (ChR2) and the fluorescent reporter protein EYFP, under the control of the CaMKIIa promoter, was given into the right CL and PC intralaminar nuclei of the central thalamus. This promoter was used because it is expressed primarily in excitatory neurons, which in the thalamus are mostly relay cells. Liu and colleagues found that nearly 34% of cells were EYFP-positive and co-expressed CaMKIIa, showing that the technique was highly selective for excitatory neurons and hence ideal for neuronal stimulation experiments. Targeted stimulation of the intralaminar nuclei was achieved by MR-validated stereotactic fiber placement and a small volume of excited tissue. Electrophysiology and video EEG monitoring were also used to investigate the network connections, and ex vivo fluorescence microscopy images of ChR2-EYFP expression were also obtained.
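The stimulation protocol quoted above (20 s of light at the start of each minute, for 6 min, at 10, 40 or 100 Hz) can be written out as pulse onset times. This is just a bookkeeping sketch of the published timing parameters; the function name and the assumption that pulses are evenly spaced from each block's start are mine, and pulse width is ignored.

```python
def pulse_onsets(freq_hz, on_s=20.0, period_s=60.0, n_blocks=6):
    """Onset times (s) of light pulses: on_s seconds of stimulation at
    freq_hz at the start of each period_s-second block, for n_blocks blocks."""
    onsets = []
    for block in range(n_blocks):
        t0 = block * period_s  # block start time
        n_pulses = int(on_s * freq_hz)
        onsets.extend(t0 + i / freq_hz for i in range(n_pulses))
    return onsets

for f in (10, 40, 100):
    print(f, len(pulse_onsets(f)))  # 10 -> 1200, 40 -> 4800, 100 -> 12000 pulses
```

The pulse counts make the frequency contrast concrete: a 100 Hz session delivers ten times as many light pulses as a 10 Hz session over the same 6 min.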

Liu and colleagues found in their experiments that EYFP-expressing axons could be seen throughout the forebrain, including areas such as the frontal cortex and striatum with the medial prefrontal, lateral prefrontal, cingulate, motor, and sensory cortices all receiving strong projections from the thalamus. Input was found to be highly convergent at the superficial layers, with moderate but weaker projections also present in the middle layers. Furthermore, projections were significantly restricted to the hemisphere ipsilateral to the virus injection for both the cortex and striatum.

Using the ofMRI technique, the authors also found, at all three frequencies, strong positive blood-oxygen-level-dependent (BOLD) signals at the site of stimulation that were highly synchronized to light delivery, increased upon optical activation, and gradually returned to baseline after the end of stimulation. Local neuronal firing was also observed. A much larger volume of brain tissue, in particular in the frontocortical areas and striatum, was activated by stimulation at 40 Hz and 100 Hz compared with 10 Hz. The difference in activation volume between the low 10 Hz stimulation and the higher 40 or 100 Hz stimulation frequencies was significant for the thalamus, striatum, and medial prefrontal, lateral prefrontal, cingulate, motor and sensory cortical areas. Striatal activity was found to be primarily localized to the dorsal sector, with negligible activity in the ventral region, and BOLD activation was generally restricted to the ipsilateral hemisphere, although activation volumes in the contralateral striatum, lateral prefrontal cortex, motor cortex and sensory cortex were all significantly greater during 100 Hz stimulation than during the low 10 Hz stimulation. The rapid 40 and 100 Hz stimulations of the central thalamus, with their widespread activation of the forebrain, produced a state of arousal in the sleeping rats, and the increase in neuronal firing rate observed during the 100 Hz stimulation was generally maintained throughout the 20 s stimulation period.

With the slower 10 Hz stimulation, Liu and colleagues found that, even though excitatory neurons had been targeted for activation, the somatosensory cortex exhibited a strong negative BOLD signal, suggesting that baseline activity had been suppressed. This was supported by the accompanying electrophysiological recordings, which showed that 10 Hz stimulation decreased the neuronal firing rate relative to the pre-stimulation period, with the decrease occurring mainly between 5 and 15 s after the initiation of stimulation. Spiking events that occurred during this inhibition had a non-uniform distribution over time, suggesting that the glutamatergic thalamocortical input only sometimes generated action potentials. The resulting lower activation of the forebrain and inhibition of the sensory cortex led to seizure-like unconsciousness in the test subjects.

Using the ofMRI technique, Liu and colleagues identified a group of inhibitory neurons in the zona incerta (ZI), a region that sends direct GABAergic projections to the somatosensory thalamic nuclei and sensory cortex and whose activity is linked to whisker stimulation. The authors found that the majority of ZI cells exhibited increases in firing rate during central thalamus stimulation at 10 Hz and 40 Hz. Spindle-like oscillations (SLOs) were evoked at the lower 10 Hz stimulation, but not at 40 Hz, and these oscillations exhibited an inter-event interval centred around 6.6 s, similar to those observed in the thalamus during the onset of sleep. Suppressing ZI firing during the 10 Hz stimulation was found to reduce the evoked cortical inhibition. Simultaneous EEG recordings in the frontal cortex revealed strong spike-wave modulation during the 10 Hz stimulation, associated with loss of consciousness, and lower-amplitude, fast oscillations during 40 Hz stimulation, associated with aroused brain states.
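The quoted SLO statistic (inter-event intervals centred around 6.6 s) is just the first differences of the event onset times. A minimal sketch, with made-up timestamps chosen only to illustrate the calculation:

```python
def inter_event_intervals(event_times):
    """Intervals between consecutive event onsets (times in seconds, sorted)."""
    return [b - a for a, b in zip(event_times, event_times[1:])]

# Hypothetical SLO onset times (s) -- illustrative, not data from the paper.
onsets = [2.1, 8.8, 15.3, 22.0, 28.5]
ivals = inter_event_intervals(onsets)
mean_ival = sum(ivals) / len(ivals)
print(round(mean_ival, 1))  # 6.6
```

In practice the onsets would come from an event-detection step on the electrophysiological trace; the interval statistic itself is this simple.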

Liu and colleagues then investigated whether the evoked activity in the ZI plays a causal role in driving the frequency-dependent inhibition of the somatosensory cortex. They injected the inhibitory opsin halorhodopsin (eNpHR), fused to the mCherry fluorescent marker and controlled by the pan-neuronal hSyn promoter, into the ZI of four animals expressing ChR2-EYFP in the central thalamus. Light activation of halorhodopsin during the 10 Hz thalamic stimulation successfully suppressed ZI activity, which the authors attributed to hyperpolarization of the neuronal cells in this area, and this suppression reduced the net inhibitory effect on somatosensory cortex activity.

The results found with ofMRI were supported by simultaneous video and EEG recordings. During the 10 Hz stimulation, the majority of animals exhibited behavior indicative of an absence seizure, including freezing and behavioral arrest throughout stimulation, leading to sleep onset. The most common EEG response was a shift to slow spike-wave discharges, indicative of a loss of consciousness. The higher 40 and 100 Hz stimulations led to the awake state and an EEG pattern associated with cortical activation and desynchronization.

Therefore, the authors concluded that the awake or unconscious (sleep-like) state is determined by how fast the central thalamic neurons are stimulated and by the resulting activity in the ZI. Differences over time could reflect the short-term plasticity of the thalamocortical pathway, which has frequency-dependent properties. Their experiments show that neuronal cells in a single population can have different firing patterns and promote different effects on connected areas depending on the temporal code of their stimulation. Since there are GABAergic projections from the ZI to the central thalamus, activity in the ZI may also limit forebrain activation through incerto-thalamic feedback. The hypothesized feedforward and feedback inhibition via the ZI both imply a direct projection from the central thalamus to the ZI, which the fluorescence imaging data supported. However, no thalamic input to the ZI specifically from the intralaminar nuclei had previously been described; the present results therefore indicate that the central thalamus has a causal, frequency-dependent influence on the ZI and thereby on arousal regulation. Suppression of ZI activity modulates activity across the brain areas susceptible to thalamic stimulation, eg. inhibitory signals from the ZI lead to frequency-dependent depression of cortical activity. This type of information could be important in the treatment of traumatic brain injury and the minimization of cognitive deficits.


What makes this paper interesting is the use of the newly popular technique of optogenetics to further investigate a brain area in relation to a well-known function. It has long been known that the central thalamus is an important area for arousal/alertness and sleep/wakefulness and that damage to this area can lead not only to excessive sleeping and coma, but also to cognitive problems such as loss of memory. The study described in this Blog post uses optogenetics to investigate the arousal and sleep functions of the thalamus further. It can be seen that stimulating the central thalamus and intralaminar nuclei at low frequencies leads to the subject losing consciousness, limited forebrain functioning, strong inhibition of the somatosensory cortex and EEG spindle bursts. Alternatively, high-frequency stimulation leads to arousal of the subject, attention and goal-directed behaviour and is supported by desynchronized EEG cortical signals.

Using optogenetics, with its high spatial and temporal sensitivity, these different effects can be attributed to activity in a specific region, the zona incerta (ZI). This is a grey matter area located in the subthalamus, under the thalamus, which gates sensory input and synchronizes cortical and subcortical brain rhythms. The area is known to contain a wide variety of cell types merging into one another and is divided into sectors, eg. rostral, dorsal, ventral (known to contain GABAergic cells) and caudal (known as the ´motor sector`), and it has attracted research attention as a deep brain stimulation target in sufferers of Parkinson`s disease.

The ZI is also known to have numerous connections: some outgoing (eg. to the cerebral cortex and hypothalamus), others incoming (eg. from the cingulate cortex, frontal lobe, parietal lobe, cerebellum, raphe nuclei, thalamic reticular nucleus and superior colliculus, the last three being cholinergic) and some bidirectional, such as with the thalamus (eg. the intralaminar and central lateral nuclei), substantia nigra (linked to DOPA and Parkinson`s disease) and globus pallidus (linked to reward). The capability of the area appears to be linked to the frequency at which it and the thalamus are stimulated. The stimulation either removes the inhibition placed upon the area (high frequency) or activates it (low frequency). Sensory suppression means hyperpolarization of the thalamus, leading to GABAergic IPSPs and depression in the ZI area; sensory activation means likely glutamatergic depolarization of the thalamus, leading to EPSPs in the ZI. Hence, depression of the ZI is prevented by the depolarization of the thalamus. Therefore, the optogenetics study of Liu and colleagues shows that the frequency of stimulation has a wide-ranging effect on neuronal firing. Similar to work on the medial lemniscus tract and the thalamus, the frequency of stimulation changes subsequent firing, such as a short EPSP leading to a longer IPSP (Castro-Alamancos). Further investigation of the firing within smaller frequency ranges is likely to reiterate the results of Bartho et al., who used anaesthetized rats. They showed that slow cortical 1-3 Hz waves become synchronized to the depth-negative phase of cortical waves to a degree comparable to thalamocortical neurons; paroxysmal high-voltage spindles display highly rhythmic activity in tight synchrony with cortical oscillations; and 5-9 Hz oscillations respond with a change in interspike interval distribution. Hence, the optogenetics technique can be used to further investigate the neural networks existing in the brain and the effect of specific stimulation frequencies on firing.

However, herein lie some problems with optogenetics. Is this technique only repeating, albeit more accurately, studies that were carried out many years ago? We may be able to pinpoint areas more accurately and say where and with what these areas are networking, but does that add sufficiently to previous knowledge to answer questions about how, for example, memory and consciousness are formed? Or how neurodegenerative diseases start? Optogenetics is expensive, sample numbers are small and the technique carries an element of risk with human subjects. It also requires cell alterations (the neurons have to express the gene encoding the light-sensitive ion channel), so can we guarantee that what we are seeing is actually real and not the result of this insertion? The benefit of this technique could be in cases where it is linked with other techniques, such as cell targeting of chemotherapy drugs, or in cases like Parkinson`s disease where we could override the effects of limited DOPA in one area, and the consequent reduced firing, by stimulating the next area in the motor system with light. Another benefit could be in comparing the molecular complexity of mechanisms investigated by other means for areas lit up due to firing from the targeted area. It is clear that the technique is here to stay and can offer new experimental avenues to explore, but the much talked-about panacea for human mental disorders is, in my opinion, not yet proven.

Since we`re talking about the topic……………………….

………………..if Alzheimer`s disease is linked to hyperexcitability of the hippocampus, could optogenetics with illumination at intervals be used to suppress activation in this area and hence, reduce the build-up of beta amyloid?

………………could the use of gold nanoparticles attached to specific antibodies as suggested by Bezanilla instead of gene therapy be used to study other membrane molecules where the transport of electrons is a part of their function and not just neurons?

Posted in neuronal firing, optogenetics, thalamus, Uncategorized

iron levels and memory performance

Posted comment on ´Iron Level and Myelin Content in the Ventral Striatum Predict Memory Performance in the Aging Brain` by T.K. Steiger, N. Weiskopf, and N. Bunzeck published in The Journal of Neuroscience, 23rd March 2016, 36(12): p.3552


Steiger, Weiskopf and Bunzeck`s paper looks at the relationship between iron accumulation, the degeneration of neuronal myelin sheaths and memory performance in the elderly. They compared performance on the verbal learning and memory test (VLMT) against the degree of myelination and iron accumulation for a group of 17 participants aged 18-32 and a group of 31 participants aged 55-79. Grey matter volume was measured using voxel-based morphometry (VBM) and magnetization transfer (MT) maps were segmented into grey matter (GM), white matter (WM) and CSF. To test for differences in the R2* (iron-sensitive) and MT (myelin-sensitive) parameters, a voxel-based quantification (VBQ) analysis was used, and statistical analysis was applied to all results. Memory performance was measured using the VLMT, in which a list of 15 unrelated items (List A) was learnt and immediately recalled five times to give the parameter ´VLMT total learning`. This process was repeated using an alternative word list (List B), which was immediately recalled once. The participants were then asked to recall List A, and then again after a delay of 30 mins. The difference between the first and second recall sessions was designated ´VLMT consolidation`. Cued recall was then performed using words taken from both List A and List B plus additional similar words; participants were required to identify the items of List A, and this score was termed ´VLMT recognition`. Regression analyses were performed on the relationships between the GM, MT and R2* maps and VLMT performance.
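As a rough illustration of the scoring procedure described above, the summary measures might be computed along these lines. This is a minimal sketch: the function, the toy word lists and the sign convention for ´consolidation` are my assumptions, not the authors` actual scoring code.

```python
def vlmt_scores(list_a_trials, recall_before_delay, recall_after_delay):
    """Summary scores in the style of the VLMT measures described above.

    list_a_trials: words recalled on each of the five immediate trials of List A.
    recall_before_delay / recall_after_delay: words recalled before and after
    the 30 min delay.
    """
    # 'Total learning': words recalled summed over the five immediate trials.
    total_learning = sum(len(trial) for trial in list_a_trials)
    # 'Consolidation': change between the two delayed recall sessions
    # (sign convention assumed here; a drop gives a negative score).
    consolidation = len(recall_after_delay) - len(recall_before_delay)
    return total_learning, consolidation

# Hypothetical data for one participant.
trials = [["dog", "sun"], ["dog", "sun", "cup"], ["dog", "sun", "cup", "key"],
          ["dog", "sun", "cup", "key"], ["dog", "sun", "cup", "key", "map"]]
total, consol = vlmt_scores(trials,
                            ["dog", "sun", "cup", "key"],
                            ["dog", "sun", "cup"])
```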

Steiger, Weiskopf and Bunzeck found in their investigation that there was a decrease in grey matter volume in the elderly brain relative to the young. Bilateral decreases were observed in the putamen, orbitofrontal cortex (OFC), precentral and postcentral gyri, supplementary motor area, left supramarginal gyrus, right occipital cortex, right superior temporal gyrus, right medial frontal gyrus and right inferior parietal gyrus. A decrease in myelin was also observed in the elderly relative to the young, as shown by VBQ on the MT maps, and this was registered for areas within the left hippocampus, right thalamus, caudate, cerebellum, postcentral and precentral gyri, right superior colliculus, left occipital cortex, and widespread WM tracts.

The authors also found, using VBQ on the R2* maps, an increase in iron levels in the elderly brains, observed within widespread brain regions including the basal ganglia (bilaterally in the putamen, pallidum, caudate and ventral striatum) and partly within the occipital cortex. A further investigation of the basal ganglia of the elderly participants using the mean R2* and MT values found a negative correlation between myelin and iron in the ventral striatum. However, a direct comparison between the correlations for elderly and young participants did not give a significant result, which the authors explained as possibly due to the small sample size.
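The reported negative correlation between the myelin-sensitive (MT) and iron-sensitive (R2*) measures is a standard Pearson correlation across participants` regional mean values. A minimal sketch, with invented numbers chosen so that higher iron pairs with lower myelin:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-participant ventral-striatum means: higher iron (R2*)
# paired with lower myelin (MT) gives a negative correlation.
r2_star = [20.0, 22.0, 24.0, 26.0, 28.0]
mt = [1.10, 1.05, 1.00, 0.95, 0.90]
r = pearson_r(r2_star, mt)
```

The toy values are perfectly anticorrelated (r = -1); real data would of course be noisier and the significance would need testing.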

Both the iron and myelin markers (and their ratio) were found by the authors to predict VLMT performance in the elderly participants. The VLMT scores of the elderly were used as covariates in whole-brain linear regression models on the GM, R2* and MT maps. The authors found within the ventral striatum a positive correlation between VLMT performance and MT, but a negative correlation between VLMT performance and R2*; VLMT performance was likewise predicted by the MT/R2* ratio in the ventral striatum, where the elderly participants showed increased iron levels. Steiger, Weiskopf and Bunzeck also reported significant effects within the vicinity of the corpus callosum, with learning correlating with MT in white and grey matter. However, a whole-brain regression analysis of grey matter volume against VLMT scores was found not to be significant.

From their results the authors concluded that there are age-related decreases in grey matter volume, a finding supported by reports from other researchers attributing the decrease to loss of neurons, changes in synaptic density and/or changes in axonal or dendritic arborization. Steiger, Weiskopf and Bunzeck concluded that the decrease in myelin in the elderly, seen as lower levels in white matter tracts and in subcortical regions, indicates less macromolecular content (mainly myelin) and probably demonstrates demyelination and dysfunctional re-myelination in the aging brain. This provides an understandable link to VLMT performance, since myelin is a factor in the speed of neuronal signal conduction and in the interconnectivity between brain areas important for learning. Also, learning induces myelination linked to oligodendrocytic function, which has been found to decrease with age. The decreased myelin level could also be due to damaged oligodendrocytes releasing iron into the surroundings. The authors found increased iron in the elderly brain mainly in the basal ganglia; the reason for this is unclear, although some have suggested that it is triggered by an attempt by the cell to maintain a declining system through increased metabolic processes. The rise in iron accumulation was found to be region-specific, and ventral striatum iron accumulation was found to be linked to demyelination and impairments in declarative memory in the elderly. The authors explained this by citing the role of the ventral striatum in encoding novel information into long-term memory; any change in myelin therefore brings about a change in the hippocampal learning mechanism. Across the whole brain, iron and myelin within the basal ganglia, and not grey matter volume, were found to account for individual VLMT performance.

In order to provide an explanation for the link between iron accumulation and memory performance, Steiger, Weiskopf and Bunzeck discussed an association between iron levels and the neurotransmitter dopamine. In the healthy brain there is a homeostatic balance between dopamine and iron, but this does not exist when iron levels are high. Therefore, the negative correlation observed in the ventral striatum in these experiments was suggested as indicating that decreased dopamine levels led to decreased memory performance. This hypothesis was supported by experiments involving iron chelation, which was shown to reverse the memory impairments.

Therefore, Steiger, Weiskopf and Bunzeck demonstrated in their paper that the iron accumulation and myelin reduction seen in elderly brains can lead to the cognitive impairments measured by the verbal learning and memory test.


This article is interesting because it tenuously links a dietary mineral, iron, which should be part of our normal nutritional intake, to neurodegenerative disease. It appears that iron accumulates in the brain naturally with age, but accumulation can also occur in some neurodegenerative diseases, which are linked with impaired memory and other cognitive skills. Hence, if this tenuous link is correct then there must be a link between iron and the physiology and mechanisms associated with brain memory. Therefore, the question has to be asked: where does the mineral iron fit in with the brain memory hypotheses for neurotransmission and cognition? Immediately, obvious connections with brain neuronal efficiency come to mind, for example: the role of iron in myelin production and oligodendrocyte functioning, with myelin giving the neurons signal transmission protection; the role of iron in the synthesis of cholesterol, affording the neuronal cells the membrane fluidity essential for efficient and correct neuronal functioning; and the role of iron in the synthesis of the neurotransmitters, providing the instigators of the firing from cell to cell. And it does not stop there, because there are less obvious roles of iron in the normal ´housekeeping` carried out in living cells, such as energy metabolism (eg. respiratory chain and citric acid cycle associations) and the biosynthesis of amino acids and nucleotides, for example.

The wide range of roles played by the mineral iron is indicative of its chemical ´flexibility`, which gives it biochemical advantages. It allows energy state changes of the molecules in which it takes part, due to its inherent capability of being in either an oxidized or a reduced state. Electron transfer, whether donation or acceptance, can lead to structural conformational changes of the molecules that include the iron ion in their structure, and these changes in conformation can be part of the functioning mechanisms of that particular molecule. Heme groups and iron-sulphur clusters are good examples of this. Owing to this electron transfer capability, it is not good for a cell to have iron ions free in the cytoplasm, and hence iron is ´wrapped up` in the form of ferritin (storage), transferrin (serum) and the transferrin receptor (entry to cells). The balance of these forms is important, as has been shown by reports of reactive oxygen species (ROS) production and modification of lipids, proteins, carbohydrates, DNA etc. when the balance is disturbed.

Therefore, it is clear that a system relying on signaling transfer such as that found in neurotransmission can be influenced by iron concentrations and this is supported by evidence that memory and cognitive skills can be affected by iron availability. Free iron accumulation has been reported in neurodegenerative diseases such as Alzheimer`s disease. Iron chelation has been found to lead to decreased symptoms, increased memory and inhibited beta-amyloid accumulation, a major contributor to Alzheimer pathology and symptoms. Therefore, is iron accumulation a cause or consequence of Alzheimer`s disease? This is important because if it is a cause then therapy based on modulating iron availability could lead to a reduction in occurrences of the disease.

A look at where iron fits into the neurotransmission mechanism shows that iron probably plays a role (in addition to those described above) in the beta-amyloid-led endocytosis of neurotransmitters into the lysosomal vesicles, which forms part of the neuronal cell regeneration after the action potential phase. Disruption of this endocytotic phase by the effects of dysfunctional amyloid-precursor-protein/beta-amyloid in sufferers of Alzheimer`s disease could explain the observed iron effects. In the hypothetical version of neurotransmission advocated by the author of this blog, ferroportin (the iron efflux transporter) is attached to the amyloid precursor protein (APP) found in the lipid raft of the presynaptic membrane, with PICALM, AP2 and clathrin all in close proximity. (Other APPs also exist in the neuronal membrane, but are outside the lipid raft and linked to the potassium channel.) In normal functioning, APP is cleaved by beta-secretase and γ-secretase to produce beta-amyloid that is capable of normal conformational changes and of neurotransmitter and metal ion binding. This, hypothetically, leads to the endocytosis of excess neurotransmitter not bound to the post-synaptic membrane: the beta-amyloid aggregate forms vesicular structures from the lipid raft area, which are transferred within the presynaptic body from the cell membrane to the endoplasmic reticulum via microtubules and dynein action. The neurotransmitters and membrane components undergo appropriate lysosomal degradation during the transport process and the vesicles are recycled back to the membrane for the next signaling phase. Under normal conditions the conversion of the membrane-bound APP to beta-amyloid causes the ferroportin channel to open, and reduced iron floods out of the cell to be picked up by the oligodendrocytes in the synaptic cleft.
This is then used for myelin production, an important mechanism especially in the case of the hippocampus with its high levels of neurogenesis that occurs there during memory formation.

In Alzheimer`s disease it is possible that the unusual cleavage of the membrane-bound APP and the formation of excess beta-amyloid do not produce the membrane conformational changes that lead to the opening of the ferroportin iron transporter, resulting in the accumulation of free iron in the cell. Such an accumulation can cause the ROS production that is reported in Alzheimer`s disease. Therefore, iron ions are part of the dysfunctions observed at the neuronal level that eventually end in cell death and the peculiar pattern of pathology observed in Alzheimer`s disease. It is also possible that reduced iron ions themselves bind to the abnormal beta-amyloid sitting on the presynaptic membrane and become part of the dysfunctional endocytotic vesicle formed at the membrane surface. This is supported by the observation that presynaptic iron induces inert alpha-synuclein and beta-amyloid to form toxic aggregates. Therefore, it is clear that in this case iron is not the cause of Alzheimer`s disease but a consequence, if this hypothesis of neuronal functioning is correct. The limiting factor of the disease appears to be associated more with the formation of excess beta-amyloid and dysfunctional APP cleavage.

Iron, however, is not the only metal ion with a role in neurotransmission. Zinc is also an essential mineral with important biochemical links to efficient neuronal function, particularly in the hippocampus, where deficiency is associated with lethargy and cognitive difficulty, for example. A rise in zinc levels has been found in Alzheimer sufferers, and zinc is a known blocker of ferroportin, the iron-transporting protein whose role in neurotransmission is described above. It is also known that vesicular release and the zinc transporter (ZnT3) are required for beta-amyloid targeting. Therefore, like iron, could zinc be a cause or a consequence of Alzheimer`s disease?

Zinc is a constituent of many enzymes, in particular the metalloproteases, and plays an important role in vesicles and in autophagy (the neuronal endocytosis described before, with the breakdown of the neurotransmitters and membranes to be recycled for future synaptic activity). With zinc, the link to neurotransmission is via calcium ions and the calcium ion influx observed with neuronal excitation. In Alzheimer`s disease, hyperexcitation in the hippocampus leads to massive calcium influx and excess glutamate release. The increase in zinc leads to increased zinc in the lysosomes, resulting in membrane disintegration, release of cathepsins and other lysosomal enzymes, and increased caspase-induced apoptosis. This brings about the neuronal pathology observed in the disease. Again like iron, although zinc deficiency leads to cognitive effects, it appears that zinc is not the limiting factor in the causation of Alzheimer`s disease but a consequence; in this case, hyperexcitation of the neuronal system in this neurodegenerative disease appears to precede the zinc effects.

Therefore, what can we conclude about the role of metal ions in neuronal transmission? We can see that both iron and zinc play a number of specific roles in neuronal signaling and neuronal cell functioning, and that deficiency can cause abnormal physiological effects that influence the overall functioning of the cell. It is also clear that the causes and physiology of Alzheimer`s disease are complicated, with effects observed in the multiple systems, enzymes etc. that make up neurotransmission and cognitive functioning, eg. relating to the action potential, neurotransmitter synthesis and release, exocytosis and endocytosis, and receptor trafficking, to name just a few. Therefore, the likelihood is low that positive changes in the neurotransmission of elderly people from something as simple as iron or zinc administration can cancel out the negative changes seen with Alzheimer pathology, leading to retention and improvement of cognitive skills. However, this does not mean that there is not a link between iron and zinc deficiency in the very early stages of Alzheimer`s disease, ie. before the distinctive beta-amyloid accumulation and oligomer pathology is observed. Since the pre-Alzheimer stage develops over many years, who knows what the real instigators are, and wouldn`t it be nice if the solution was as easy as administering zinc or iron! More research is obviously required, but until then maybe everyone should make sure that their daily mineral intake is sufficient.

Since we`re talking about the topic……………………………..

…..are mouse models of Alzheimer`s disease the best models when dietary considerations are being investigated and should it not be that ´elderly` mice are preferentially used for testing for this particular neurodegenerative disease?

……is it that in iron deficient mice, myelin production in the hippocampus is reduced and this can be linked to synchronization problems between this area and others relating to spatial memory and conditioning. In this case, would neuroimaging experiments and brain wave monitoring show the defective connectivity between the hippocampus and other areas linked to memory for example?

Posted in iron, memory recall, neuronal firing, Uncategorized

neuroimaging of connectome changes after working memory training

Posted comment on ´Dynamics of the Human Structural Connectome Underlying Working Memory Training` by K. Caeyenberghs, C. Metzler-Baddeley, S. Foley and D.K. Jones published in The Journal of Neuroscience, 6th April 2016, 36(14) p. 4056


Neuroimaging studies of the brain normally involve showing functional areas of the brain and how their functioning responds to some change in condition. The work of Caeyenberghs and colleagues is no different, except they have found that the results of neuroimaging studies relating memory capability pre- and post-cognitive training can depend on the metrics used. Most of the previous research on this topic uses diffusion tensor MRI and the metric of fractional anisotropy (FA) to look at white matter, and the results obtained are inconsistent. In Caeyenberghs and colleagues` study, 40 healthy participants underwent either an adaptive training program for working memory (Cogmed; 45 min sessions, 40 sessions in total, with training for verbal and spatial memory) or non-adaptive training. The participants were assessed using MRI neuroimaging and computerized working memory and executive function tests.

The neuroimaging techniques used in the reported study combined well-established (although nonspecific) diffusion tensor MRI metrics with both MRI relaxometry-based metrics (an indirect measure of myelin, corrected for motion and distortion artifacts, for example) and metrics derived from advanced diffusion models (giving estimates of axonal density, corrected for distortions induced by the diffusion-weighted gradients and head motion, for example). Quantitative maps of axonal morphology were constructed using the CHARMED protocol and maps of myelin level using the mcDESPOT protocol. A total of eleven different kinds of networks were generated, a network being defined as a set of nodes denoting anatomical regions and interconnecting edges denoting undirected, tractography-reconstructed fiber trajectories between the nodes. These were weighted using the mcDESPOT protocol. Network areas included the inferior and superior parietal cortex, supramarginal gyrus, caudal and rostral middle dorsolateral prefrontal cortex, superior frontal cortex, inferior ventrolateral prefrontal cortex, insula and anterior cingulate cortices and the subcortical regions of the basal ganglia, i.e. the caudate, putamen, globus pallidus and thalamus. The grey matter volumes of 30 regions of interest were used to construct structural correlation networks. FreeSurfer was used for cortical reconstruction and volumetric segmentation of the brain's surface to compute cortical thickness. The authors then performed graph theoretical analyses to obtain their imaging results.

In order to ascertain the possible relationships between the various cognitive skills in their behavioural testing methods, Caeyenberghs and colleagues ran an exploratory principal component analysis on the pre-training test scores, and this showed three significant behavioral components that together accounted for 59% of the total variance. The first component, complex span working memory, accounted for 34% and related to all of the tasks in which information had to be actively maintained in short-term memory, eg. the automated symmetry span task, the spatial span task and the odd-one-out task. The second component, accounting for 13% of the variance, was associated with tasks involving a verbal component and included the double trouble task, the digit span tasks and the grammatical reasoning task. Tasks requiring general reasoning, including the Hampshire tree task and the self-ordered spatial span, related to the third component, which accounted for 12% of the variance. Statistical analyses of the results were carried out on the combined scores.
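The ´percentage of total variance` figures quoted above come from the eigenvalues of the covariance matrix of the test scores. For two variables the eigenvalues have a closed form, which allows a small illustrative sketch (toy data, not the study`s scores):

```python
import math

def variance_explained_2d(x, y):
    """Fraction of total variance captured by the first principal component
    of two variables, via the closed-form eigenvalues of the 2x2 covariance
    matrix [[vx, cxy], [cxy, vy]]."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / (n - 1)
    vy = sum((b - my) ** 2 for b in y) / (n - 1)
    cxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    tr, det = vx + vy, vx * vy - cxy ** 2
    # Larger eigenvalue of the covariance matrix.
    lam1 = (tr + math.sqrt(tr ** 2 - 4 * det)) / 2
    # Total variance equals the trace, so this is the explained fraction.
    return lam1 / tr

frac = variance_explained_2d([1, 2, 3, 4], [1, 3, 2, 4])
```

With many tasks, as in the study, the same idea applies with the full covariance matrix and numerical eigendecomposition.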

Caeyenberghs and colleagues found that the Cogmed training program produced positive cognitive changes. The main effects of time (pre- and post-training) and group (adaptive and non-adaptive training) were significant, but there was no significant difference between the three cognitive skills (ie. complex working memory, verbal and general reasoning). The interactions of time against group and time against cognitive skill were both significant, and a three-way interaction between factor, group and time was also found to be significant. Varying levels of performance improvement were obtained, with the highest levels for complex working memory and the verbal component in the post-training session for the adaptive group compared to the non-adaptive group.

Using graph theoretical network analysis of the working memory training effects, the authors found significant changes for the interaction of group against time using the mcDESPOT protocol. This demonstrated that there had been an increase in the global efficiency of the network in the adaptive group from pre- to post-training. Marginally significant interaction effects were observed for the global efficiency of the graphs weighted by different diffusion-derived parameters, including FA, 1/mean diffusivity (MD), axial diffusivity, tissue volume fraction, 1/radial diffusivity and the number of streamlines. The parameter that best captured the effect was the relaxation rate. No significant interaction effects were observed for the graph weighted by the total restricted fraction derived from the CHARMED protocol, or for the graph derived from the covariance of gray matter volumes.
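Global efficiency, the graph metric at the centre of this analysis, is the mean inverse shortest-path length over all node pairs; with weighted connectomes, edge ´distances` are commonly taken as the reciprocal of connection strength. A self-contained sketch on a toy three-node network (not the authors` actual pipeline):

```python
import heapq

def global_efficiency(nodes, edges):
    """Mean inverse shortest-path length over all ordered node pairs.
    edges are (node_a, node_b, strength); distance = 1 / strength."""
    adj = {n: [] for n in nodes}
    for a, b, w in edges:
        adj[a].append((b, 1.0 / w))
        adj[b].append((a, 1.0 / w))  # undirected graph

    def shortest(src):
        # Dijkstra's algorithm from src over the distance-weighted graph.
        dist = {n: float("inf") for n in nodes}
        dist[src] = 0.0
        pq = [(0.0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist[u]:
                continue
            for v, w in adj[u]:
                if d + w < dist[v]:
                    dist[v] = d + w
                    heapq.heappush(pq, (dist[v], v))
        return dist

    n = len(nodes)
    total = 0.0
    for src in nodes:
        dist = shortest(src)
        total += sum(1.0 / dist[t] for t in nodes if t != src)
    return total / (n * (n - 1))

# Toy network: a chain A-B-C with unit connection strengths.
nodes = ["A", "B", "C"]
edges = [("A", "B", 1.0), ("B", "C", 1.0)]
eff = global_efficiency(nodes, edges)
```

For the chain, the A-C path has length 2, so the efficiency is (1 + 1/2 + 1)/3 = 5/6; adding an A-C edge would raise it to 1, which is the sense in which ´higher global efficiency` means more integrated information transfer.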

Analysis of the neuroimaging results obtained for different regions led to the identification of the nodes responsible for the effects of the working memory training. The authors found that the right anterior rostral cingulate gyrus, an area associated with attentional control and mental effort, showed a significant group against time interaction. Using a different method, significant group against time interaction effects were also found for the right inferior ventrolateral prefrontal cortex, which is associated with attentional orienting processes. These observations were supported by post hoc two-sided t test results.

Caeyenberghs and colleagues also found that correlation analyses between the pre- to post-training changes in global efficiency and the composite scores of the behavioral parameters showed little direct association between changes in structural network metrics and the improved performance on the cognitive tests. However, using an exploratory uncorrected threshold of p < 0.05, correlations were observed between changes in Cogmed task performance and changes in the global efficiency of the R1-weighted networks, with better working memory performance on the Cogmed tasks pairing with higher efficiency of information transfer (ie. more global integration). None of the correlations remained significant when the necessary correction for multiple comparisons was carried out.
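The fate of these correlations under correction for multiple comparisons can be illustrated with a short sketch of the Holm-Bonferroni procedure. The p-values below are invented, and the paper does not state that this particular correction method was used; it simply shows how effects passing an uncorrected 0.05 threshold can all fail after correction.

```python
def holm_bonferroni(p_values, alpha=0.05):
    """Return a parallel list of booleans: which tests survive a
    Holm-Bonferroni correction for multiple comparisons."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    survives = [False] * m
    for rank, i in enumerate(order):
        if p_values[i] <= alpha / (m - rank):
            survives[i] = True
        else:
            break  # once one test fails, all larger p-values fail too
    return survives

# Hypothetical p-values: three pass an uncorrected 0.05 threshold,
# but none survive the correction, as in the analysis described above.
p_vals = [0.03, 0.04, 0.02, 0.45]
uncorrected = [p < 0.05 for p in p_vals]
corrected = holm_bonferroni(p_vals)
```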

The scores of global efficiency of the networks constructed with different metrics as ´connection strengths` were found to be highly intercorrelated at baseline, which the authors interpreted as representing non-independent observations. For example, the global efficiency of the network whose connection strengths were defined by the quantitative relaxation rate R1 (1/T1) derived from the mcDESPOT protocol correlated strongly with the global efficiency of the network weighted by the TRF derived from the CHARMED protocol. However, the difference scores in global efficiency of the R1-weighted networks were found not to correlate significantly with the difference scores of the TRF-weighted or MWF-weighted networks. Caeyenberghs and colleagues therefore concluded that, although all metrics correlated at baseline, the reduction in correlation post-training suggested that the white matter network underwent changes during training, and that these changes are better detected with R1 than with the other metrics tested. The different metrics relate to different aspects of white matter microstructure: the relaxation time T1 is affected by changes in water, lipid, and protein content, T2 by iron within the oligodendrocytes, and MWF by lipid myelin content. The changes observed with training in Caeyenberghs and colleagues` study are therefore suggested to be linked to alterations in these cell components.

Therefore, Caeyenberghs and colleagues showed in their neuroimaging study that changes occur in the structural connectome as a result of adaptive cognitive training. These changes relate to improved performance on working memory and verbal tasks and less so on the far-transfer tasks involving general reasoning. The positive performance changes relate to increased global efficiency, and white matter changes are likely, as shown by the increased sensitivity of the relaxation-rate-weighted networks. Since the authors discovered that some MRI metrics are not ideal for this type of neuroimaging of these particular cognitive skills (eg. techniques normally used for observing global efficiency changes rely on FA or MD), Caeyenberghs and colleagues concluded their paper by emphasizing the need for specific microstructural markers for this type of experimentation.


What makes this paper and others like it interesting is the shift of emphasis in neuroscience over the years, from laboratory test-tube experiments on long-term physiological changes in single brain area samples to the current fields of neuroimaging of real-time neuronal firing and functional networks. This gives another dimension to cognitive research, and this paper takes advantage of these modern real-time techniques to demonstrate how training can affect neuronal cell firing and neural networks. However, every technique has its problems and drawbacks and neuroimaging is no exception, with the authors here demonstrating that not all metrics are ideal for every cognitive situation and that some can mask effects that would normally be seen, or give results different to those obtained by other means. Coupled with these experimental problems are others relating to the fact that living subjects are being used, and so, as with all experiments of this nature, stress, anxiety, timing, previous medication etc. can all affect the imaging results obtained. Even recruiting enough participants to produce significant results can be a problem.

However, the results obtained from neuroimaging can expand the neuroscientific knowledge front, and what was observed here and in other studies of this nature are the functional changes in neuronal firing and neural networking relating to cognitive training. The authors here found a positive training effect, suggested to result from a positive effect on axonal connectivity within and between certain areas. Caeyenberghs and colleagues identified 11 different networks and found a significant result for the group containing the right anterior rostral cingulate gyrus, an area normally associated with attentional control and mental effort. Using a different method, significant results were also obtained for a group with the right inferior ventrolateral prefrontal cortex, an area normally associated with attentional orienting processes and decision-making. These results supported observations by Dreher and Grafman, who investigated task switching and dual-task performance using fMRI. They showed that performing two tasks successively or simultaneously activated a common prefrontal-parietal neural network relative to performing each task separately. Performing two tasks simultaneously brought about activation in the rostral anterior cingulate cortex, whereas switching between two tasks activated the left lateral prefrontal cortex and the bilateral intraparietal sulcus region. The results were interpreted as indicating that the rostral anterior cingulate cortex serves to resolve conflicts between stimulus–response associations when performing two tasks simultaneously (attentional control and mental effort), whilst the lateral prefrontal cortex dynamically selects the neural pathways needed to perform a given task during task switching (attentional orienting processes).

The involvement of more brain areas during specific tasks and the effect of training have also been observed by other researchers. In August 2015, a post was published on this blog about the Cogmed program and a meta-analysis of its results performed by Spencer-Smith and Klingberg, the authors of the paper. In their paper, they reported an average improvement of 16% after participation in the Cogmed training program, independent of participant health status. The repetitive nature of the training program, with participants learning through adaptation during the program time span, applying routine, using memory chunking, improving attention and concentration, and taking note of feedback, all possibly provide reasons why the training program led to cognitive improvement. The authors also noted that there was a greater effect on visuospatial memory than verbal memory, suggesting that the improvement could be associated with increased working memory. The improvements obtained were also sustained for 2-8 months after training had finished.

From a neuroscientific point of view, studies have shown that training increases connectivity in frontoparietal and parieto-occipital networks (Kunden). We also know that working memory requires acetylcholine and glutamate and the involvement of many brain areas. For example, neurons in the prefrontal cortex are associated with multi-tasking, working memory and attention (Messinger), and visual working memory requires activity in the inferotemporal cortex, V4, medial temporal cortex, prefrontal cortex and globus pallidus, lateral inferoparietal cortex (guided eye movements in attention) and posterior parietal cortex (Koenigs – manipulation of information in working memory), as well as fronto-hippocampal connectivity (Cordesa-Cruiz). Working memory activity sees changes in prefrontal oscillations, with theta oscillations increasing during temporal order maintenance and alpha oscillations increasing over the posterior parietal and lateral occipital regions for item maintenance (Hsieh), with these alpha oscillations being used to maintain the relevant memory contents rather than suppressing unwanted or no longer relevant memory traces (Manza). In spatial working memory, theta oscillations occur in the medial prefrontal cortex, with the ventral hippocampus playing a role in synchronization (O`Neill).

Cogmed training has also been linked to increased attention as well as working memory. In fact, it is likely that Cogmed decreases inattention and increases how much verbal and visuospatial information a subject can temporarily work with (Slezak). What part is affected by training must still be ascertained: it could be an increase in efficiency of Posner`s control networks of alerting, orienting or central executive, or of the components of orienting (Posner and Petersen), with disengagement (responsibility of the parietal area) and shifting and reengagement of focus (responsibility of the What-Where pathway, cortical medial temporal cortex or pulvinar nuclei of the thalamus). Another area that exhibits higher efficiency after training is the selection process that retrieves the relevant items from memory (activation in the rostral superior frontal sulcus and posterior cingulate cortex) or the updating process that changes the focus of attention onto them (caudal superior frontal sulcus and posterior parietal cortex). Training could also shift the top-down, bottom-up balance of the control systems (control – stimulus-driven and goal-directed – Asphland, or top-down and bottom-up systems – Corbetta and Shulman), with training leading to better control of the top-down attentional systems, or improve working memory biases of attention by initiating the novel parieto-medial-temporal pathway proposed by Soto. The interconnectivity of all these areas can be observed by neuroimaging.

It is also possible that training programs change the balance of task-relevant to task-irrelevant (or attended to unattended) information, hence improving overall performance this way. It is known that working memory performance is dependent on effectively filtering out irrelevant information through neural suppression (Zonto). The dorsal parietal cortex exerts influence on top-down attention and the ventral parietal cortex on bottom-up attention (Curicella), with the prefrontal cortex playing a role (Nieuwenhaus – 5HT and dopamine amplify task-relevant information rather than inhibiting distraction), as does the cingulate cortex (Egner – an increase in task-relevant information rather than inhibition of task-irrelevant information). This balance between task-relevant and task-irrelevant information can be affected by various factors. For example, there appears to be decreased inhibition of task-irrelevant information with age (Blair; Heshier; Redrick), and anxiety appears to decrease the processing of task-relevant information (Weltman), but on a positive note these effects can be mitigated by training (Matzell) and computer games (Gofper; Green).

In order to see how the training altered activation at the physiological level, the authors of this paper used different neuroimaging metrics. They found that working memory training brings about changes reflected in these metrics (T1 being sensitive to water, lipid, and protein content, T2 to iron within the oligodendrocytes, and MWF to lipid myelin content) and associated the changes observed with training with alterations in these cell components. Other researchers have found that working memory training is associated with variability in white matter (Golestani) and, more specifically, increased myelination of white matter in the intraparietal sulcus and anterior body of the corpus callosum (Takeichi). Chapman and colleagues also reported changes in blood flow. They found in their MRI studies (arterial spin labeling MRI, functional connectivity, and diffusion tensor imaging) that healthy seniors tested pre-, mid-, and post-training (12 weeks) had significant training-related changes in the brain in the resting state. Specifically, there were increases in global and regional cerebral blood flow and connectivity, particularly in the Default Mode Network and the central executive network, and there was increased white matter integrity in the left brain areas connecting parts of the limbic system in the temporal lobe (eg. hippocampus, amygdala) with parts of the frontal lobes such as the orbitofrontal cortex. They suggested that cognitive training enhanced resting-state neural activity and connectivity and increased the blood flow to certain brain areas, an idea supported by the results given in Caeyenberghs and colleagues` paper.

Therefore, it can be summarized that training can induce positive effects on certain types of memory and processing, and that these effects are likely to be associated with improved task-relevant information levels, increased attention and increased connectivity between brain areas responsible for attentional systems, visual systems and working memory. However, a word of caution: training of this type does not give an unlimited positive effect, ie. the more you train the more you improve. Even after 8 weeks of training, which is a long time and requires a high level of commitment to the training program, the improvement in cognitive performance for healthy individuals remains below 25%. The advantage appears to come when cognitive problems are evident pre-training. It would be interesting to see how the neuroimaging results reported here correlate to training programs undertaken under these conditions.

Since we`re talking about the topic…….

….can we assume that the neuroimaging techniques applied here can be used to demonstrate neural functioning and neural connectivity in other types of memory which have a time element eg. conditioning?  Should connectivity between the areas relating to emotions and decision-making also appear strengthened?

…using this neuroimaging technique would provide interesting comparisons of functioning and neural connectivity if knock-out mice, or transgenic mice, are compared to controls.

….would the consumption of caffeine which increases alertness and consolidation of memory, or other stimulants shortly before each participation in the training program have a noticeable effect on the neuroimaging results as well as performance post-training?

Posted in neuroimaging, neuronal networks, training, Uncategorized, working memory

The importance of endogenous rhythms

Posted comment on ´In Sync: How to Take Control of Your Many Body Clocks` by Catherine de Lange and published in New Scientist 16th April 2016 issue no. 3069 p. 30


De Lange begins her article by describing one example of how the body`s own natural timing system determines behaviour. She describes an example of chrononutrition (where a person eats and drinks at the same time day after day) and research carried out on this type of timing by Gerda Pot, a nutrition researcher, whose grandmother exhibited this type of behaviour. Research has shown that the body`s timing system is not a single timepiece, but many ´units` with the function of synchronizing the different tissues and organs for optimal performance at different times of the day. Disruption of this synchronization can result in functioning problems and possibly illness.

The idea of body clocks is not new and de Lange describes the history of the findings associated with them. Although the first written report appeared over 300 years ago from a French astronomer, research interest grew in the 1970s. It was then that researchers identified the suprachiasmatic nucleus, located in the hypothalamus, which monitors light and dark signals from the external environment and converts this information into control of physiological processes such as hormone release, body temperature and appetite. Hence, external timing is converted into internal circadian rhythms. In 2014, Hogenesch described a further timing system not limited to the brain but consisting of circadian genes found in 12 mouse organs such as the heart, lungs, liver and skin, so that the functioning of these organs varied over the course of a day. This system involves the activation of two genes by external influences, causing appropriate gene cascades and bursts of cellular activity; feedback from the products switches off the instigating genes.
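
The gene-cascade-with-product-feedback mechanism described above can be caricatured with a minimal simulation. This is a deliberately simplified sketch, not the actual clock gene circuit: the thresholds, rate constants and two-state gene are all hypothetical. A gene drives synthesis of its product until the accumulated product switches the gene off; once the product decays below a lower threshold, the gene switches back on, yielding a self-sustaining rhythm.

```python
def simulate_clock(steps=400, dt=0.1, k_syn=1.0, k_deg=0.2,
                   theta_off=2.0, theta_on=1.0):
    """Toy negative-feedback gene oscillator.

    The gene is switched off when its product p exceeds theta_off
    (product feedback) and back on when p decays below theta_on.
    Returns the product concentration trace over time.
    """
    p, gene_on, trace = 0.0, True, []
    for _ in range(steps):
        if gene_on and p > theta_off:
            gene_on = False      # feedback from the product silences the gene
        elif not gene_on and p < theta_on:
            gene_on = True       # product cleared: gene reactivates
        # synthesis while the gene is on, first-order degradation always
        p += ((k_syn if gene_on else 0.0) - k_deg * p) * dt
        trace.append(p)
    return trace

trace = simulate_clock()
# The product repeatedly rises above and falls back below the feedback
# threshold, i.e. the circuit cycles rather than settling to a steady state.
rises = sum(1 for a, b in zip(trace, trace[1:]) if a <= 2.0 < b)
print(rises)
```

The same on/off-with-feedback logic, scaled up to interlocked transcription-translation loops, is what gives the peripheral organ clocks their roughly 24 hour period.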

Continuing with the topic of circadian rhythms and eating habits, de Lange describes in her article work carried out in 2000 which showed that the peripheral gene clocks could be decoupled from the central suprachiasmatic nucleus pacemaker. Research showed that this could be achieved by simply changing the time at which mice ate relative to their sleeping patterns. It was found that if mice could only eat during the day, a time when they would normally be asleep, then their peripheral clocks shifted by 12 hours. The liver adapted the fastest to this new time schedule, taking 3-4 days, but by one week other organs, eg. heart, kidney and pancreas, had also adapted. However, the light-activated suprachiasmatic nucleus clock remained unchanged. Researchers found that the timing shift of the organs had consequences for the health of the animals, since those mice whose body clocks had shifted by 12 hours were more likely to gain weight and acquire fatty livers. It was also found that if the time windows for eating were restricted, then the mice responded similarly to mice on a calorie-controlled diet regardless of the levels of food intake. Therefore, it was concluded that external cues could reset the peripheral clocks, leading to desynchronization with the central brain body clock and causing problems with food digestion. This view was confirmed by Pot, who looked at the eating habits of 5000 people and found that those who ate at irregular times had a higher risk of metabolic syndrome, including diabetes, decades later.

Continuing with the topic of endogenous rhythms and eating, de Lange also quotes in her article the work of Garaulet, who suggested that weight gain was linked to the circadian clocks of dieters being desynchronised. Garaulet found in 2014 that dieters who had a healthy circadian clock lost more weight. She also found that some people who have a certain variant of a peripheral clock gene have difficulty losing weight. Timing of eating also appears to play a role, with people who eat their main meal before 3pm losing a quarter more body mass than those eating later. Garaulet`s investigation also showed that when lean, healthy women ate later than usual, within one week their metabolism had slowed, causing glucose intolerance and alterations in the daily variation of cortisol. Therefore, weight loss could be linked not just to dietary intake, but also to timing of eating and genetics.

In her article, de Lange shows that circadian rhythms and body clocks are important for other physiological functions besides eating and the digestive system. For example, the heart is affected by a cortisol rush at the start of the waking day and the lungs are more efficient during the most active times of the day. Changes in bodily functions linked to time and disruption of synchronization have also been linked to mental diseases, eg. depression, Alzheimer`s disease, Parkinson`s disease and schizophrenia, and could explain the higher incidence of metabolic and psychological conditions observed in those people who regularly work night shifts.

De Lange ends her article with a look at how the knowledge that bodily rhythms can affect physiological functions could be used to a person`s advantage. She gives examples of how personal weight or jet lag may be controlled better by taking into consideration a person`s circadian rhythms, eg. the timing of meals and the timing of drug administration. Timing of drug administration was found to possibly affect a drug`s efficacy, eg. a delayed-release formula of prednisone for nocturnal asthma, or blood pressure medications whose efficiency increased if taken before going to bed. Hogenesch shows that the majority of America`s commonly prescribed drugs target pathways with circadian rhythms, and that since most drugs have a short half-life of 6 hours, the timing of administration relative to relevant endogenous rhythms could have a significant impact on drug efficacy. He suggested that the treatment of one illness in particular, cancer, could benefit from consideration of circadian rhythms. Cancerous cells are normally arrhythmic, but drug transport systems maintain the normal physiological circadian rhythms, and therefore the timing of drug therapy could be manipulated to maximize harm to the tumour and minimize harm to the rest of the body.
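
The half-life point can be made concrete with a small calculation. For a drug eliminated with first-order kinetics, the fraction of a dose remaining after t hours is 0.5^(t / t_half); with the 6 hour half-life quoted above, very little of a dose survives to the next circadian cycle, which is why the dose has to land on the right phase of the target pathway`s rhythm. The numbers below are purely illustrative.

```python
def fraction_remaining(t_hours, half_life_hours=6.0):
    """First-order elimination: fraction of the dose still present
    after t_hours, given the drug's half-life."""
    return 0.5 ** (t_hours / half_life_hours)

# With a 6 hour half-life, a morning dose is largely gone by night:
# the drug only overlaps the part of the circadian cycle nearest
# the time of administration.
for t in (0, 6, 12, 24):
    print(t, round(fraction_remaining(t), 4))
```

After 24 hours only about 6% of the dose remains, so a drug aimed at a pathway that peaks at night will mostly miss its target if taken in the morning.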

De Lange concludes her article by expressing her anticipation of the future of chronobiology as more and more evidence appears that lead us to believe that we should be looking at our personal circadian rhythms as a key to helping optimal health and well-being.


Endogenous rhythms are a complicated topic, with external Zeitgebers and internal instigators, possible interrelation and rhythmic physiological mechanisms, and de Lange describes in her article one such rhythm, that of temporal eating patterns, and how it can be linked to body weight changes. What makes this article interesting is that even though we are in charge of what we do, and this can be independent of actual environmental conditions (eg. the time at which we go to sleep is not related to when the sun actually goes down), our endogenous rhythms keep us functioning to internal ´clocks` that are common to others and evolutionarily conserved. Naturally, such an important influence on physiological processes has been the subject of research over decades, not just with human subjects but with other species too, yet the influence is still not fully understood, nor how we can use these rhythms to our physical or mental advantage. There have, of course, been advancements in knowledge of the field over the decades, with numerous studies on Zeitgebers, interrelativity and desynchronisation. There has also been in-depth research into brain areas like the suprachiasmatic nucleus and the identification of pacemaker clock genes controlling internal RNA and protein synthesis, leading to changes in activity and hence overall cellular functioning. These advancements have led to explanations of how endogenous rhythms may affect the physiological functioning of living organisms.

My own views on the subject like many others support the concept that endogenous rhythms whether involving the whole organism or single cells are the work of two systems divided according to the sources of the relevant signals that govern them. The first system involves the external signals (Zeitgebers) such as light or temperature and involves a ´receiver` of some description. In the case of the former this is the suprachiasmatic nucleus and pineal gland. This type of endogenous rhythm could even include environmental electromagnetic forces linked to lunar cycles and the hypothetical electric signals of Burr`s life fields. It could also include the indirect rhythms such as physical activity for example which are a result of the more direct light-dark environmental rhythm. Just like in the definition of endogenous rhythms these signals are likely themselves to be rhythmic and lead to internal rhythmic expression of biological variables and a temporal organization of these rhythms. In these cases, the biological variables are likely to be hormones, enzymes or even in the case of Burr`s life fields the hypothetical electric signals.

The other system is newer and consists of internal signals, the sources of which are cellular clock genes. These are active systems capable of self-sustained oscillations leading to rhythmic optimal cellular functioning based ultimately on protein synthesis. In 2003, Reinberg and Ashkinazi described their circadian genetic model and a set of essential genes which produced an exact 24 hour cellular rhythm. They showed a set of polygenes that could add or subtract 0.8 hours, leading to an assembly of genes creating a 20-28 hour circadian rhythm. Cellular function is controlled by the clock genes, which affect transcription, and hence this type of rhythmic activity is sensitive to RNA and protein synthesis inhibitors. Reinberg and Ashkinazi also suggested that the clock polygenes are usually repressed when the external Zeitgebers are present. However, others indicate that this may not be the case (a view that I share), with both central and peripheral systems working together, either functioning at the same time but on different systems, or releasing their hold on their respective systems so that one dominates over the other for a known period of time.

Therefore, knowledge of how the endogenous rhythms are brought about is fairly advanced, but there are still major problems in finding out accurately how the rhythms affect physiological functioning and hence, how we can use them to our physiological and mental advantage. Some of the difficulties with endogenous rhythms are as follows:

  • One of the major problems is inter-individual variability. This makes it difficult to associate single signals to specified effects and to measure changes in these signals and effects with time. Normally, a strong Zeitgeber means there is a strong influence on the system in question, but even this can demonstrate inter-individual variability, and hence suitable systems and effects are difficult to ascertain accurately for a significant number of participants.
  • Another problem is that the biological variable of the endogenous rhythm can change not only with time as expected, but over time, too. The expected effects can be seen as a wave function of regular timing, but when amplitudes also change, this demonstrates inconsistency. Age or drug use can cause such an effect, which makes definitive conclusions difficult to draw.
  • The ideal research finding would be that one signal causes one rhythm, but research has found that interaction between rhythms and effects is possible, and therefore temporal organization is important. Ticher found 7 groups of functioning rhythms: physiological (37 individual rhythms), cognitive (32), endocrine (27), metabolites (14), organic molecules (25), cellular components (18) and enzymatic activity (15), and correlations have been found between the acrophases of many different groups. Masking can also alter the rhythms observed, making it difficult to ascertain the different signals and their specific effects. Animal studies, and particularly the use of transgenic mice, may make the investigations easier and extend the limit of research possibilities.
  • Another difficulty relates to research into desynchronization of an endogenous rhythm, which may be easy to observe but difficult to interpret. For example, plasticity of systems and quick adaptability (eg. changes in response to jet lag occur within days and are an example of transient desynchronization) may mask the effect of desynchronisation. Also, some changes are examples of allochronism, where one or several rhythms are desynchronized but not to the detriment of the organism, hence the overall effect may be masked. However, some may be examples of desynchronism, ie. those causing illness or changes that cannot be positively adapted to. Therefore, it can be difficult to interpret the effects of the relevant endogenous rhythms, eg. drug administration can have immediate drastic effects on functioning, but hide more long-term subtle effects on other endogenous rhythms such as eating or sleeping.

Therefore, the topic of endogenous rhythms is interesting but complicated, and when we talk about controlling rhythms and working to the advantage of individuals, we have to be aware that this is difficult to achieve and even more difficult to assess. Certain signals producing common rhythmic changes, such as light-dark, sleep-wake and body temperature, are probably better researched than the internal gene-clock-led rhythms. However, individual variations and the multitude of factors involved even in these systems make it difficult for definitive statements to be made. Availability of transgenic animal species, better testing, improved recording, and superior mathematical computer programmes for analysing data may lead to advancements in research into endogenous rhythms, both internally and externally instigated, and if drug (or even diet) success can be improved by taking endogenous rhythms into consideration, then the effort is worth it. This topic, like the hypothesized Burr`s life fields and electric signals, must be understood in order to completely explain human physiology.

Since we`re talking about the topic:

….can we assume that the circadian rhythms of sleep-wake, body temperature, physical activity of transgenic mice mimicking dementia are the same as normal control mice? Does the administration of anti-inflammatory drugs to those transgenic mice have any effect on these important endogenous rhythms?

…..could the grip strength of dominant hand and non-dominant hand and indicating brain hemisphere differences be investigated with reference to novel eating and sleeping patterns and certain drug administration regimes known to cause brain activity changes?

Posted in circadian rhythms, endogenous rhythms, Uncategorized

Quantifying consciousness

Posted comment on ´The One-Second Test of Consciousness` by A. Ananthaswamy and published in New Scientist 20th February 2016 issue no. 3061 p. 10


Ananthaswamy describes in his article how scientific groups are beginning to formulate a method by which the conscious experience can be quantified. It involves calculating the value of phi, which is defined as the level of integration of information in an information-processing system, and which the Integrated Information Theory applies to consciousness. This theory, according to Tononi and as described in Ananthaswamy`s article, says that each aspect of what we are aware of is unified, and hence it is difficult to be aware of a single aspect of an experience since the brain integrates the sensory data into one. For the system to be conscious, according to Tononi, the integrated information must be greater than the sum of the individual elements. Phi is, therefore, a measure of the success of this integration.

Ananthaswamy reports in this article on one approach to calculating phi, which involves how each part of the system is dependent on the others. The ´cruellest` division, where the parts are the least dependent on each other, gives a phi value of zero and is attributed non-conscious status. As dependency increases, so does the value of phi. Although the concept is simple, its execution is very difficult; however, Tegmark, an American cosmologist, has devised a fast way of approximating it.

Tegmark in his method regards each neuron as a node and the interconnections between it and other cells as links. Each link is given a thickness proportional to the strength of the interconnection. The thinnest links in the network are then disregarded, and this action is repeated in a step-by-step occlusion method until a single interconnected network remains. Cutting this network into two would approximate the ´cruellest cut`. By carrying out this method, the time taken to find phi is drastically shortened, eg. finding phi for the human brain takes less than a second, and hence the technique can be applied to reveal the level of human awareness at any one time. Gazzaniga has already seen an application for the method in research and therapy of neuropsychiatric disorders. He suggests that by measuring phi in each brain hemisphere he can determine whether each side retains its own consciousness unaware of the other, and expects that if this is the case then the hemispheric values of phi will be lower than for an unseparated control brain.
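
Tegmark`s pruning procedure, as described here, can be sketched in a few lines. This is a toy reconstruction from the article`s verbal description, with an invented four-node network; the real method operates on full neural connectivity data. Links are removed thinnest-first for as long as the network stays in one piece, and the link whose removal would finally split it marks the approximate ´cruellest cut`.

```python
def connected(nodes, edges):
    """Breadth-first check that all nodes sit in one component."""
    if not nodes:
        return True
    adj = {n: [] for n in nodes}
    for u, v, _ in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, queue = {nodes[0]}, [nodes[0]]
    while queue:
        for m in adj[queue.pop()]:
            if m not in seen:
                seen.add(m)
                queue.append(m)
    return len(seen) == len(nodes)

def cruellest_cut(nodes, edges):
    """Prune links thinnest-first while the network stays whole;
    return the link whose removal would split it into two."""
    kept = list(edges)
    for e in sorted(edges, key=lambda x: x[2]):  # thinnest first
        trial = [x for x in kept if x != e]
        if connected(nodes, trial):
            kept = trial
        else:
            return e
    return None

# Hypothetical ring of four nodes: two strong 'within-module' links
# (A-B and C-D) and two weaker links bridging the modules.
nodes = ["A", "B", "C", "D"]
edges = [("A", "B", 5.0), ("C", "D", 4.0), ("B", "C", 1.0), ("A", "D", 2.0)]
print(cruellest_cut(nodes, edges))
```

On this toy network the procedure strips the weakest bridge first and then stops at the remaining bridge, correctly locating the division between the two strongly connected pairs, which is the intuition behind the approximation.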

Following verification of the hypothesis, the method could also be used to identify people with consciousness disorders who may have been misdiagnosed. Ananthaswamy gives the example of the misdiagnosis of minimally conscious state for someone who is conscious, but completely paralysed. In this case, high values of phi should be obtained. Others in the neuroscience field are also enthusiastic about the new method for the quantification of phi and consciousness.


Ananthaswamy`s article describes one method of putting a mathematical value to the conscious experience and hence quantifying it. The method described is that of Tegmark, who based it on Tononi`s Integrated Information Theory; it derives the mathematical value of phi from the neural activity of one event represented as a network, with characteristics such as the colour and thickness of lines representing the neuronal firing. Therefore, from a neuroscientific perspective it is probably easy to find the zero value of phi, which represents the non-conscious state, and this creates a ´symbolic` lowest-level value which can be expanded when there is conscious awareness of events. However, problems can be predicted with this method of identification of the conscious experience in the case of awareness, since the method requires the disregarding of the weakest connections of the neural image obtained.

The first problem is that everyone is an individual, and therefore controls for neuronal experience and awareness would be required: it would have to be known what firing patterns an individual shows under normal conditions in order to determine whether changes occur. Would the sight of a red, round ball in the left peripheral field give a neural network firing pattern drastically different from the one obtained when the ball is observed straight on? It is likely that for any event huge neuronal areas would be alight, representing interconnected systems, event characteristics, emotional values and memories. The firing representing the characteristics of which we are aware makes up only a small part of this. Conscious awareness covers those characteristics that a person can report, but the other systems also form part of the neural firing signature observed at that time. This view is supported by consciousness theories such as Zeki`s microdomains of consciousness and Jerath`s 3D default space, as well as Local Recurrence Theory and the Reentrant Dynamic Core Theory. Therefore, a quantitative assessment of phi would include not only the characteristics of which we are consciously aware, but a mass of unattended system-related firing that allows that conscious awareness to take place. The problem with this method of phi identification is that the firing patterns of these essential but unconscious systems may be stronger than those depicting the red colour or rounded shape, if we take our example of the red, round ball as the conscious event. In this case, by disregarding the weakest firing, are we going to end up with only the attentional engagement mechanism, for example, and not the features of the conscious event?
This may be acceptable, since firing relating to the systems employed is likely to be common across individuals and could theoretically be excluded from the original firing images before the process of removing the weakest connections begins. However, the level of attention may vary from individual to individual, and therefore even the firing levels of this system may differ, making it difficult to set up a computer or mathematical model to disregard these weakest links.

Another problem of the method that has to be considered is the delay between event presentation and conscious awareness. The event characteristics are registered 200–300 ms before conscious awareness (the Libet delay). Therefore, when should the firing image for the calculation of phi be taken? If it is taken 200 ms or later after the presentation of the image, as would be indicative of conscious awareness, the firing pattern may already be non-representative of the actual experience: some neurons are already entering their refractory periods, leading to the firing of other neurons according to rules such as lateral inhibition in the visual system, and even reporting the conscious event will strengthen some firing and not other. Firing patterns therefore change with time, which parallels the editing of parallel drafts in Dennett`s Multiple Drafts Theory and the victorious assembly, with groups constantly competing and modulating, in Edelman and Tononi`s Reentrant Dynamic Core Theory. Firing patterns also change through top-down influences. Both from a consciousness perspective (Ramachandran`s filling-in) and from the neuroscientific perspective of memory recall, the neural signature is affected by the application of information from reactivated memories to the conscious experience. Hence, phi quantification is affected not only by bottom-up input but also by top-down input, and therefore, does the neural pattern being observed, ready to serve the mathematical quantification of consciousness, represent just the red, round ball or the red, round ball loved as a child?

Therefore, we have seen that phi is difficult to quantify from a neuronal firing perspective. It may be possible if we could take several images of different circumstances that vary in only one simple, single-sense characteristic, e.g. red colour or silhouette shape, and compare them, removing the consistently firing areas independent of their actual firing level. If we assume that the firing links removed represent the common cognitive systems employed by the individual, the firing remaining should represent the single characteristic of which the individual is conscious. This image could be tested against a presentation of the same characteristic at such a speed that it remains unconscious, or presented in the peripheral view, or, if possible, by using a participant who suffers from blindsight. The problem comes when the method is applied to multiple characteristics of one sensory event, and it is even greater if multiple senses are employed. Marcel`s slippage will also present a problem, with integrated firing in a multisensory temporal binding window, phase-locked according to Walter Freeman`s dynamic systems approach, but with a time delay for some sensory information and not for others. And who is to say that this is the same for everyone in every single circumstance?
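The comparison procedure suggested above, imaging several conditions that differ in one characteristic and stripping away the firing common to all of them, can be sketched as simple set arithmetic over labelled active units. The unit names and conditions here are invented for illustration; real recordings would of course be far noisier.

```python
# Hypothetical illustration of the differencing idea: each condition is
# recorded as the set of units active in it; firing common to every
# condition is treated as shared cognitive machinery and removed,
# leaving candidate feature-specific firing. All labels are invented.
recordings = {
    "red_ball":   {"V4_colour", "V2_shape", "attention", "arousal"},
    "green_ball": {"V4_colour2", "V2_shape", "attention", "arousal"},
    "red_cube":   {"V4_colour", "V2_shape2", "attention", "arousal"},
}

# Units active in every condition, regardless of firing level.
shared = set.intersection(*recordings.values())

# What remains per condition after the shared systems are removed.
feature_specific = {cond: active - shared
                    for cond, active in recordings.items()}
print(feature_specific["red_ball"])
```

Note that the intersection deliberately ignores firing strength, matching the suggestion above that common areas be removed independently of their actual firing level; the essay`s objection still stands, namely that attention-related firing may not in fact be identical across conditions or individuals.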

Therefore, Tegmark`s method for the quantification of phi looks possible and advantageous on paper, but a couple of examples from the neuroscientific perspective show that the actual physical imaging, and then the interpretation of those images, present problems. Is phi, then, a psychological concept only? From the neuroscience view there is no denying that Integrated Information Theory applies to neuronal firing, and that integrated firing leads, for example, to the formation of assemblies and to perception, but the constant modulation of those assemblies, due even to physiological constraints, makes it difficult to use the information in a generic situation. Simplicity of the conscious event may aid the neural representation imaged, but conscious awareness is a fleeting event, delayed in real time, influenced by top-down as well as bottom-up systems, and only demonstrable through language or action, which itself changes the overall picture observed. All of these factors will lead to changes in the neural pattern that is observed. Therefore, it may be that phi can only be defined at the zero unconscious level or default network level in humans, in which case it must be asked what value an accurate calculation of phi actually has apart from saying that something is not conscious?

Since we`re talking about the topic…

…Transcranial magnetic stimulation (TMS) is said to distinguish between patients described as minimally conscious and those suffering from unresponsive wakefulness syndrome, presumably via the differing responsiveness of the auditory, olfactory and touch senses, and deep TMS is said to decrease levels of self-awareness. Therefore, could TMS be used to create the control images for the calculation of phi?

….if 40 Hz (gamma) oscillations are disrupted accurately in distinct brain areas, could differences in neural patterns, due to the resulting lack of synchronicity, and hence differences in phi, be measurable?

….can we assume that seeing external events, and actions performed by ourselves, reflected in a mirror will produce neural patterns different from those observed directly, and could these be used to ascertain the value of, and differences in, phi?

