A simulation of an area in the brain—the first project to emerge from Johns Hopkins’ new Computational Medicine Core—could help researchers better understand how humans process sound.
Once perfected, it could be used to understand why some people, such as those with schizophrenia or tinnitus, hear things that don’t exist, or why others, like people with autism, are overly sensitive to background noise.
“I think this small prototype can improve our understanding of what goes on in the brain of normal listeners and in those who do not process sound normally,” says auditory neuroscientist Dana Boatman.
The model represents a patch of primary auditory cortex, the region that maps sound frequencies, and is detailed down to the cellular level. Computational neuroscientist Pawel Kudela spent roughly two months of the Army Research Office-funded project building it with the GENESIS simulation software.
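The article does not include the model itself, but as a rough sense of what a cellular-level, frequency-mapped patch of cortex can look like in simulation, here is a minimal Python sketch of leaky integrate-and-fire neurons arranged along a tonotopic axis. Every name and parameter here is a hypothetical simplification for illustration; the team's actual model, built in GENESIS, is far more detailed.

```python
import numpy as np

# Minimal sketch (not the team's GENESIS model): a tonotopic patch of
# leaky integrate-and-fire neurons, each tuned to a preferred frequency.
N_NEURONS = 100                      # neurons along the tonotopic axis
DT = 0.1e-3                          # simulation time step (s)
TAU_M, V_REST, V_THRESH, V_RESET = 20e-3, -65e-3, -50e-3, -65e-3

# Preferred frequencies spaced logarithmically from 200 Hz to 8 kHz,
# mimicking the frequency map of primary auditory cortex.
pref_freqs = np.logspace(np.log10(200), np.log10(8000), N_NEURONS)

def tuning_current(stim_freq, amplitude=2e-9, bandwidth_octaves=0.5):
    """Input current to each neuron: strongest where the stimulus
    frequency matches the neuron's preferred frequency."""
    octave_dist = np.abs(np.log2(stim_freq / pref_freqs))
    return amplitude * np.exp(-(octave_dist / bandwidth_octaves) ** 2)

def simulate(stim_freq, duration=0.1):
    """Run the patch on a pure tone for `duration` seconds; return spike counts."""
    v = np.full(N_NEURONS, V_REST)
    spikes = np.zeros(N_NEURONS, dtype=int)
    i_in = tuning_current(stim_freq)
    r_m = 1e8  # membrane resistance (ohms), hypothetical value
    for _ in range(int(duration / DT)):
        v += DT / TAU_M * (V_REST - v + r_m * i_in)
        fired = v >= V_THRESH
        spikes[fired] += 1
        v[fired] = V_RESET
    return spikes

# The neuron whose preferred frequency is nearest 1 kHz fires the most.
print(simulate(1000).argmax())
```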
The team is now refining the simulation and examining how its response to sound compares with that of the human auditory cortex. To do this, data representing sound are fed into the model, and the output is compared with Boatman's studies of electrical activity in the brains of real people hearing sounds.
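Again purely as illustration (the data shapes and names below are assumptions, not the team's pipeline): one simple way to score such a comparison is to treat both the simulated output and the recorded evoked response as waveforms on a common time base and correlate them.

```python
import numpy as np

# Illustrative sketch only: comparing a simulated population response with a
# recorded evoked response. Both arrays are hypothetical stand-ins for
# whatever the model and the human recordings actually produce.
def normalized_similarity(simulated, recorded):
    """Pearson correlation between two evoked-response waveforms
    sampled on the same time base."""
    simulated = (simulated - simulated.mean()) / simulated.std()
    recorded = (recorded - recorded.mean()) / recorded.std()
    return float(np.mean(simulated * recorded))

# Hypothetical example: 500 ms of activity sampled at 1 kHz.
t = np.arange(0, 0.5, 1e-3)
recorded_erp = np.exp(-t / 0.1) * np.sin(2 * np.pi * 10 * t)        # stand-in for a measured response
simulated_erp = 0.8 * recorded_erp + 0.2 * np.random.randn(t.size)  # stand-in for model output

print(f"similarity: {normalized_similarity(simulated_erp, recorded_erp):.2f}")
```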
It’s one of several projects being taken on by the recently formed Computational Medicine Core. Created by Kudela, neurosurgeon William Anderson and Institute for Computational Medicine Director Raimond Winslow, the core provides Johns Hopkins clinical researchers with mathematical models of complex biological systems.
“Computational modeling techniques have proven themselves to be so important in other fields, like the natural sciences and engineering, and they’re beginning to penetrate medicine as well,” says Anderson. “Mathematical models provide quick hypothesis and theory testing platforms that you can then use to look at the real data in a more enlightened fashion.”
To add computational modeling to your biomedical research projects, email Pawel Kudela at [email protected].