(Image caption: New model mimics the connectivity of the brain by connecting three distinct brain regions on a chip. Credit: Disease Biophysics Group/Harvard University)
Multiregional brain on a chip
Harvard University researchers have developed a multiregional brain-on-a-chip that models the connectivity between three distinct regions of the brain. The in vitro model was used to extensively characterize the differences between neurons from different regions of the brain and to mimic the system’s connectivity.
The research was published in the Journal of Neurophysiology.
“The brain is so much more than individual neurons,” said Ben Maoz, co-first author of the paper and postdoctoral fellow in the Disease Biophysics Group in the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS). “It’s about the different types of cells and the connectivity between different regions of the brain. When modeling the brain, you need to be able to recapitulate that connectivity because there are many different diseases that attack those connections.”
“Roughly twenty-six percent of the US healthcare budget is spent on neurological and psychiatric disorders,” said Kit Parker, the Tarr Family Professor of Bioengineering and Applied Physics at SEAS and Core Faculty Member of the Wyss Institute for Biologically Inspired Engineering at Harvard University. “Tools to support the development of therapeutics to alleviate the suffering of these patients are not only the humane thing to do, they are the best means of reducing this cost.”
Researchers from the Disease Biophysics Group at SEAS and the Wyss Institute modeled three regions of the brain most affected by schizophrenia — the amygdala, hippocampus and prefrontal cortex.
They began by characterizing the cell composition, protein expression, metabolism, and electrical activity of neurons from each region in vitro.
“It’s no surprise that neurons in distinct regions of the brain are different but it is surprising just how different they are,” said Stephanie Dauth, co-first author of the paper and former postdoctoral fellow in the Disease Biophysics Group. “We found that the cell-type ratio, the metabolism, the protein expression and the electrical activity all differ between regions in vitro. This shows that it does make a difference which brain region’s neurons you’re working with.”
Next, the team looked at how these neurons change when they’re communicating with one another. To do that, they cultured cells from each region independently and then let the cells establish connections via guided pathways embedded in the chip.
The researchers then measured cell composition and electrical activity again and found that the cells dramatically changed when they were in contact with neurons from different regions.
“When the cells are communicating with other regions, the cellular composition of the culture changes, the electrophysiology changes, all these inherent properties of the neurons change,” said Maoz. “This shows how important it is to implement different brain regions into in vitro models, especially when studying how neurological diseases impact connected regions of the brain.”
To demonstrate the chip’s efficacy in modeling disease, the team doped different regions of the brain with the drug phencyclidine hydrochloride, commonly known as PCP, which induces schizophrenia-like effects. The brain-on-a-chip allowed the researchers, for the first time, to look at both the drug’s impact on the individual regions and its downstream effect on the interconnected regions in vitro.
The brain-on-a-chip could be useful for studying any number of neurological and psychiatric diseases, including drug addiction, post-traumatic stress disorder, and traumatic brain injury.
"To date, the Connectome project has not recognized all of the networks in the brain,” said Parker. “In our studies, we are showing that the extracellular matrix network is an important part of distinguishing different brain regions and that, subsequently, physiological and pathophysiological processes in these brain regions are unique. This advance will not only enable the development of therapeutics, but fundamental insights as to how we think, feel, and survive.”
A research group at MIT has created a new class of fast-acting, soft robots from hydrogels. The robots are activated by pumping water in or out of hollow, interlocking chambers; depending on the configuration, this can curl or stretch parts of the robot. The hydrogel bots can move quickly enough to catch and release a live fish without harming it. (Which is a feat of speed I can’t even manage.) Because hydrogels are polymer gels consisting primarily of water, the robots could be especially helpful in biomedical applications, where their components may be less likely to be rejected by the body. For more, see MIT News or the original paper. (Image credit: H. Yuk/MIT News, source; research credit: H. Yuk et al.)
Suppose you woke up in your bedroom with the lights off and wanted to get out. While heading toward the door with your arms out, you would predict the distance to the door based on your memory of your bedroom and the steps you have already taken. If you touched a wall or a piece of furniture, you would refine that prediction. This is an example of how important it is to supplement limited sensory input with your own actions to grasp a situation. How the brain carries out such a complex cognitive function is an important topic in neuroscience.
Dealing with limited sensory input is also a ubiquitous issue in engineering. A car navigation system, for example, can predict the current position of the car based on the rotation of the wheels even when a GPS signal is missing or distorted in a tunnel or under skyscrapers. As soon as the clean GPS signal becomes available, the navigation system refines and updates its position estimate. Such iteration of prediction and update is described by a theory called “dynamic Bayesian inference.”
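To make the predict-and-update cycle concrete, here is a minimal sketch of a one-dimensional Kalman-style filter for the car-navigation example: the position estimate is advanced from wheel rotation at every step and blended with a GPS reading whenever one is available. All of the noise values, step sizes, and the dropout window are illustrative assumptions, not parameters of any real navigation system.

```python
import numpy as np

def predict(mean, var, wheel_step, motion_noise):
    # Prediction: advance the position estimate using odometry (wheel rotation);
    # uncertainty grows because the motion measurement is imperfect.
    return mean + wheel_step, var + motion_noise

def update(mean, var, gps_reading, gps_noise):
    # Update: blend the prediction with the GPS reading, weighted by their
    # respective uncertainties (a one-dimensional Kalman/Bayesian update).
    gain = var / (var + gps_noise)
    return mean + gain * (gps_reading - mean), (1.0 - gain) * var

# Toy run: the car moves 1 m per step; the GPS drops out in a "tunnel"
# between steps 3 and 7, so the estimate coasts on odometry alone there.
rng = np.random.default_rng(0)
mean, var = 0.0, 1.0
true_pos = 0.0
for step in range(10):
    true_pos += 1.0
    mean, var = predict(mean, var, wheel_step=1.0, motion_noise=0.2)
    gps_on = not (3 <= step <= 7)
    if gps_on:
        gps = true_pos + rng.normal(0.0, 0.5)
        mean, var = update(mean, var, gps, gps_noise=0.25)
    print(f"step {step}: estimate {mean:5.2f} +/- {var**0.5:.2f} (GPS {'on' if gps_on else 'off'})")
```

Notice that the uncertainty grows while the GPS is off and shrinks again as soon as a clean reading arrives, which is exactly the prediction-update iteration described above.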
In a collaboration between the Neural Computation Unit and the Optical Neuroimaging Unit at the Okinawa Institute of Science and Technology Graduate University (OIST), Dr. Akihiro Funamizu, Prof. Bernd Kuhn, and Prof. Kenji Doya analyzed the brain activity of mice approaching a target under interrupted sensory inputs. The research was supported by the MEXT Kakenhi Project on “Prediction and Decision Making,” and the results were published online in Nature Neuroscience on September 19th, 2016.
The team performed surgeries in which a small hole was made in the skulls of mice and a glass cover slip was implanted onto each of their brains over the parietal cortex. Additionally, a small metal headplate was attached in order to keep the head still under a microscope. The cover slip acted as a window through which researchers could record the activities of hundreds of neurons using a calcium-sensitive fluorescent protein that was specifically expressed in neurons in the cerebral cortex. Upon excitation of a neuron, calcium flows into the cell, which causes a change in fluorescence of the protein. The team used a method called two-photon microscopy to monitor the change in fluorescence from the neurons at different depths of the cortical circuit (Figure 1).
(Figure 1: Parietal Cortex. A depiction of the location of the parietal cortex in a mouse brain can be seen on the left. On the right, neurons in the parietal cortex are imaged using two-photon microscopy)
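The raw readout of this kind of two-photon calcium imaging is a fluorescence trace for each neuron, typically summarized as the relative change ΔF/F around a baseline. The snippet below is a minimal sketch of that conversion on a synthetic trace; the percentile-based baseline and the simulated transients are assumptions for illustration, not the study’s actual processing pipeline.

```python
import numpy as np

def delta_f_over_f(trace, baseline_percentile=20):
    # Estimate a baseline fluorescence F0 (here, a low percentile of the trace)
    # and express the signal as the relative change (F - F0) / F0, a standard
    # way to convert calcium-indicator fluorescence into an activity readout.
    f0 = np.percentile(trace, baseline_percentile)
    return (trace - f0) / f0

# Toy trace: a slowly decaying baseline plus two calcium transients.
t = np.arange(0, 60, 0.1)                      # seconds, 10 Hz sampling
trace = 100 + 2 * np.exp(-t / 40)              # baseline fluorescence (a.u.)
for onset in (15, 38):                         # two simulated activity events
    trace += 30 * np.exp(-(t - onset) / 1.5) * (t >= onset)

dff = delta_f_over_f(trace)
print(f"peak dF/F: {dff.max():.2f}")
```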
The research team built a virtual reality system in which a mouse can be made to believe it is walking around freely when, in reality, it is fixed under a microscope. The system consists of an air-floated Styrofoam ball on which the mouse can walk and a sound system that emits sounds to simulate movement towards or past a sound source (Figure 2).
(Figure 2: Acoustic Virtual Reality System. Twelve speakers are placed around the mouse. The speakers generate sound based on the movement of the mouse running on the spherical treadmill (left). When the mouse reaches the virtual sound source, it gets a droplet of sugar water as a reward)
An experimental trial starts with a sound source simulating a position 67 to 134 cm in front of and 25 cm to the left of the mouse. As the mouse steps forward and rotates the ball, the sound is adjusted to mimic the mouse approaching the source: its volume increases and its direction shifts. When the mouse reaches the spot just beside the sound source, drops of sugar water come out of a tube in front of the mouse as a reward for reaching the goal. After the mice learn that they will be rewarded at the goal position, they lick the tube more and more as they come closer to the goal, in expectation of the sugar water.
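As a rough illustration of how such an acoustic virtual reality loop can work, the sketch below advances a simulated mouse, recomputes the distance and bearing to the virtual source, and maps distance to a sound level with a simple inverse-distance rule. The geometry follows the numbers in the article, but the attenuation law, step size, and reward threshold are illustrative assumptions rather than the actual parameters of the OIST setup.

```python
import numpy as np

def virtual_sound(mouse_xy, source_xy, ref_level_db=70.0):
    # Distance and bearing from the mouse to the virtual sound source.
    dx, dy = source_xy[0] - mouse_xy[0], source_xy[1] - mouse_xy[1]
    distance = np.hypot(dx, dy)
    # Louder as the mouse approaches: a simple inverse-distance attenuation
    # (an assumed rule, not the experiment's exact level/direction mapping).
    level_db = ref_level_db - 20 * np.log10(max(distance, 1.0) / 25.0)
    azimuth_deg = np.degrees(np.arctan2(dx, dy))  # 0 deg = straight ahead, negative = left
    return level_db, azimuth_deg

# Trial sketch: the source sits 25 cm to the left and about 100 cm ahead.
source = (-25.0, 100.0)
mouse = [0.0, 0.0]
for _ in range(12):
    mouse[1] += 10.0                              # mouse steps 10 cm forward on the ball
    level, azimuth = virtual_sound(mouse, source)
    at_goal = abs(mouse[1] - source[1]) < 5.0     # just beside the source -> reward
    print(f"y={mouse[1]:5.1f} cm  level={level:5.1f} dB  azimuth={azimuth:6.1f} deg"
          + ("  -> reward (sugar water)" if at_goal else ""))
```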
The team then tested what happens when the sound is removed over certain simulated distances, in segments of about 20 cm. Even when the sound is not given, the mice increase licking as they come closer to the goal position in anticipation of the reward (Figure 3). This means that the mice predict the goal distance based on their own movement, just as the dynamic Bayesian filter of a car navigation system predicts a car’s location from wheel rotation in a tunnel. Many neurons changed their activities depending on the distance to the target, and interestingly, many of them maintained their activities even when the sound was turned off. Additionally, when the team injected a drug that suppresses neural activity into a region of the mice’s brains called the parietal cortex, the mice no longer increased licking when the sound was omitted. This suggests that the parietal cortex plays a role in predicting the goal position.
(Figure 3: Estimation of the goal distance without sound. Mice are eager to find the virtual sound source to get the sugar water reward. As they get closer to the goal, they increase licking in expectation of the reward, both when the sound is on and when it is omitted. This result suggests that mice estimate the goal distance by taking their own movement into account)
In order to further explore what the activity of these neurons represents, the team applied a probabilistic neural decoding method. Each neuron was observed across more than 150 trials of the experiment, so its probability of becoming active at different distances to the goal could be identified. This allowed the team to estimate each mouse’s distance to the goal, at each moment, from the recorded activities of about 50 neurons. Remarkably, the neurons in the parietal cortex predicted the change in the goal distance due to the mouse’s movement even in the segments where sound feedback was omitted (Figure 4). When the sound was given, the predicted distance became more accurate. These results show that the parietal cortex predicts the distance to the goal from the mouse’s own movements even when sensory inputs are missing, and updates the prediction when sensory inputs become available, in the same form as dynamic Bayesian inference.
(Figure 4: Distance estimation in the parietal cortex utilizes dynamic Bayesian inference. Probabilistic neural decoding allows for the estimation of the goal distance from neuronal activity imaged from the parietal cortex. Neurons could predict the goal distance even during sound omissions. The prediction became more accurate when sound was given. These results suggest that the parietal cortex predicts the goal distance from movement and updates the prediction with sensory inputs, in the same way as dynamic Bayesian inference)
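Probabilistic population decoding of the kind described above can be sketched as follows: each neuron’s probability of being active at each candidate distance is estimated from training trials, and Bayes’ rule combines the activity of the whole population into a posterior over distance. Everything in the snippet (tuning-curve shapes, neuron count, activity levels) is hypothetical; it illustrates the principle rather than the decoder used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
distances = np.linspace(0, 100, 21)          # candidate goal distances (cm)

# Hypothetical tuning curves: each neuron's probability of being active at each
# candidate distance, as would be estimated from the training trials.
n_neurons = 50
centers = rng.uniform(0, 100, n_neurons)
tuning = 0.05 + 0.75 * np.exp(-((distances[None, :] - centers[:, None]) ** 2) / (2 * 15.0**2))

def decode(activity):
    # Posterior over distance given a 0/1 activity vector, assuming neurons are
    # conditionally independent given the distance (naive-Bayes-style decoding).
    log_post = (activity[:, None] * np.log(tuning)
                + (1 - activity[:, None]) * np.log(1 - tuning)).sum(axis=0)
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

# Simulate the population at a true distance of 40 cm and decode it back.
true_d = 40.0
p_active = 0.05 + 0.75 * np.exp(-((true_d - centers) ** 2) / (2 * 15.0**2))
activity = (rng.random(n_neurons) < p_active).astype(float)
posterior = decode(activity)
print(f"decoded distance: {distances[posterior.argmax()]:.0f} cm (true {true_d:.0f} cm)")
```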
The hypothesis that the neural circuit of the cerebral cortex realizes dynamic Bayesian inference has been proposed before, but this is the first experimental evidence showing that a region of the cerebral cortex realizes dynamic Bayesian inference using action information. In dynamic Bayesian inference, the brain predicts the present state of the world based on past sensory inputs and motor actions. “This may be the basic form of mental simulation,” Prof. Doya says. Mental simulation is the fundamental process for action planning, decision making, thought and language. Prof. Doya’s team has also shown that a neural circuit including the parietal cortex was activated when human subjects performed mental simulation in a functional MRI scanner. The research team aims to further analyze those data to obtain the whole picture of the mechanism of mental simulation.
Understanding the neural mechanism of mental simulation helps answer the fundamental question of “How are thoughts formed?” It should also contribute to our understanding of psychiatric disorders that involve flawed mental simulation, such as schizophrenia, depression, and autism. Moreover, by understanding the computational mechanisms of the brain, it may become possible to design robots and programs that think the way the brain does. This research contributes to the overall understanding of how the brain allows us to function.
Wanting to feel productive, the grad student prints multiple articles with reckless abandon.
Packing numerous books and papers that he plans to read over winter break, the grad student deludes himself.
Do you know anyone prone to pleonasm?
Read the full definition here: http://www.dictionary.com/wordoftheday/2016/11/16?param=social
Method of teaching… method of communication
Just when lighting aficionados were in a dark place, LEDs came to the rescue. Over the past decade, technologies based on LEDs (light-emitting diodes) have swept the lighting industry by offering features such as durability, efficiency and long life.
Now, Princeton engineering researchers have illuminated another path forward for LED technologies by refining the manufacturing of light sources made with crystalline substances known as perovskites, a more efficient and potentially lower-cost alternative to materials used in LEDs found on store shelves.
The researchers developed a technique in which nanoscale perovskite particles self-assemble to produce more efficient, stable and durable perovskite-based LEDs. The advance, reported January 16 in Nature Photonics, could speed the use of perovskite technologies in commercial applications such as lighting, lasers and television and computer screens.
“The performance of perovskites in solar cells has really taken off in recent years, and they have properties that give them a lot of promise for LEDs, but the inability to create uniform and bright nanoparticle perovskite films has limited their potential,” said Barry Rand, an assistant professor of electrical engineering and the Andlinger Center for Energy and the Environment at Princeton.
Read more.