For nearly 30 years, Rhoda Au, the director of neuropsychology at the Framingham Heart Study (FHS), has been looking at how researchers can improve the diagnostic tests used to predict who will develop Alzheimer’s. Most recently, she has been applying digital technology to the FHS, which began in 1948 and has greatly increased researchers’ understanding of what causes cardiovascular disease. The study has followed residents of Framingham, Massachusetts, for three generations. Now, Au is using digital voice recordings and a digital pen to determine whether changes in the way residents speak or write could help researchers understand whether or not they will develop Alzheimer’s. Alzheimer’s advocates like Bill Gates have taken notice: “I recently met a researcher named Rhoda Au who is working on some seriously cool ways to detect Alzheimer’s. If her research proves successful, we might one day predict whether you will get the disease by simply listening to the sound of your voice or watching how you write with a pen,” he wrote in a recent blog post.
- Au said her team plans to continue analyzing the voice recordings to see how accurately these analyses could predict who will develop Alzheimer’s
- The team is using a digital pen to track how changes in response times related to the clock-drawing test could determine whether someone is going to get Alzheimer’s
Being Patient spoke to Au about her work on the FHS, the speech features she is focusing on, and how a digital pen could answer questions about someone’s decision-making processes and response times.
Heart Disease and Alzheimer’s Risk: What’s the Connection?
Being Patient: What have you learned from the Framingham Heart Study about how lifestyle could impact dementia risk?
Rhoda Au: The Framingham Heart Study began in 1948 with an original cohort of over 5,000 participants in the town of Framingham, Massachusetts. Then in 1971, their children and their children’s spouses were brought into the study, and then their grandchildren were also brought in, in 2002. When Framingham first started, we didn’t know the causes of heart disease and stroke. The NIH had launched Framingham as a 20-year study to see if we could find those determinants. Through Framingham, the founders of this project were able to identify all the cardiovascular risk factors that we take for granted today. When you go to the doctor’s office, you get your blood pressure measured, your weight, your cholesterol; they ask if you smoke, have diabetes, etc. Those are all things that researchers within the Framingham Heart Study determined are related to heart disease and stroke. Those are factors that we try to mitigate. Now, if you look at the literature, we’re starting to say, oh, but people with diabetes are at higher risk for dementia, as well as people with high blood pressure. It’s the same relationship. That’s where Framingham has really contributed, in allowing us to make that connection in a much stronger way.
Speech Tests for Alzheimer’s — What to Look for
Being Patient: Can you tell us about how you’re using voice recognition to detect Alzheimer’s?
Rhoda Au: In my search for a better way to assess people, I realized that when you’re giving a neuropsychological test, you’re asking people questions and looking for them to give you a correct response. That’s what we record and how we give people a score. But when you test people, they give you lots of responses: some that are correct and some that are less so. I realized there’s a richness in their responses. This is not something that I discovered. It’s something that was championed by a clinical neuropsychologist in Boston, Edith Kaplan. She had promoted the idea of what we call the Boston Process Approach. It’s not about what your final response was, but how you got to that response. We have a test called block design, where we ask you to create figures with blocks. She pointed out that if you construct the figure in the wrong way, you get a score of zero. But if the person is eating the blocks, they also get a score of zero. Clearly, those are people in two very different states. She trained us to think about how you get to that response.
I wanted to capture all these responses, but you can’t write everything down while people are speaking, so I realized that I had to record them. I had to record them so that I could go back and capture all these responses. I created the voice recordings almost by accident, because I was really trying to figure out what all those other things were that they were saying. I wasn’t thinking, oh, the voice itself could be diagnostic. We started doing that in 2005. When Siri came out, I realized, wow, voice recognition and analysis is becoming really sophisticated, and I’ve been recording these people’s responses to neuropsychological tests. I realized that is data in itself. Because when you are testing people, you can hear differences over time. They may still test well, but they’re starting to hesitate, have more difficulty finding the right word, or they may choose a different one. There are lots of strategies people can use that make them still seem like they normally are, even though they’re starting to shift.
It took a number of years because this is a very new concept. With help from some colleagues at MIT, we were able to get some funding to do an initial proof-of-concept study, where we took voice recordings from people we knew were cognitively impaired and people we knew were not. We subjected those to a battery of voice analyses, looking at speech-to-text features and audio qualities like changes in pitch or tone, hesitations, pauses, stutters, fragmented sentences and a whole host of things. On the basis of some advanced analytics and machine learning approaches, we were able to differentiate the people who were cognitively impaired from those who weren’t on the basis of their voice recordings. That’s the initial project.
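To give a sense of the kind of hesitation and pause measures Au describes, here is a minimal sketch, not the study’s actual pipeline, of how pause-based features could be derived from word-level timestamps of the sort an automatic speech-to-text system produces. The function, feature names, and example timings are all invented for illustration.

```python
# Hypothetical sketch: deriving pause/hesitation features from
# word-level timestamps. Timestamps below are invented.

def pause_features(words, pause_threshold=0.5):
    """words: ordered list of (word, start_sec, end_sec) tuples.
    Returns simple hesitation features of the kind a voice-analysis
    model might combine with hundreds of others."""
    pauses = []
    for (_, _, end1), (_, start2, _) in zip(words, words[1:]):
        gap = start2 - end1
        if gap > pause_threshold:       # a gap this long counts as a pause
            pauses.append(gap)
    total_time = words[-1][2] - words[0][1]
    return {
        "num_pauses": len(pauses),
        "mean_pause": sum(pauses) / len(pauses) if pauses else 0.0,
        "speech_rate": round(len(words) / total_time, 2),  # words/sec
    }

# Invented example: a short response with one long hesitation.
words = [("the", 0.0, 0.2), ("clock", 0.3, 0.7),
         ("shows", 0.8, 1.1), ("ten", 2.4, 2.6),
         ("past", 2.7, 3.0), ("eleven", 3.1, 3.6)]
feats = pause_features(words)
```

In the invented example, the 1.3-second gap before "ten" registers as a single hesitation; real systems would combine many more features (pitch, word choice, sentence fragmentation) across whole recordings.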
Being Patient: What did you detect that equates someone’s voice with cognitive decline?
Rhoda Au: Right now, what we’re finding is that it’s a profile. Cognition is a very complex exercise. I don’t think you’ll ever find one measure that really reflects people’s cognitive capabilities. In this voice analysis, one of the highly predictive models used a combination of about 256 different features. We find a lot of information just in the audio quality, but then if you add some of the speech-to-text features, like the language features, word selections, number of words or complexity of words, that can add to it. We also need to keep in mind that this is going to be relative to an individual’s baseline. It’s very important to have these longitudinal recordings over time so that we can see shifts. Everybody has a different way of speaking. It’s harder to compare one person to another person. It’s better to compare people to their own baselines and see how they’re changing across a number of different measures. That’s where we need to get. We have the data, but have not yet been able to do those analyses.
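The idea of measuring change against a person’s own baseline, rather than against other people, can be sketched as follows. This is a hypothetical illustration, not the study’s method: each feature from a new recording is expressed as a z-score against the mean and spread of that person’s earlier recordings. All feature names and numbers are invented.

```python
# Hypothetical sketch: comparing a new recording to an individual's
# own longitudinal baseline rather than to other people.
from statistics import mean, stdev

def baseline_deviation(history, current):
    """history: list of feature dicts from earlier recordings.
    current: feature dict from the newest recording.
    Returns a z-score per feature relative to the personal baseline."""
    deviations = {}
    for feat in current:
        values = [rec[feat] for rec in history]
        mu, sigma = mean(values), stdev(values)
        deviations[feat] = (current[feat] - mu) / sigma if sigma else 0.0
    return deviations

# Invented data: three earlier recordings, then a newer one with
# noticeably more hesitation and slower speech.
history = [{"num_pauses": 2, "speech_rate": 1.8},
           {"num_pauses": 3, "speech_rate": 1.7},
           {"num_pauses": 2, "speech_rate": 1.9}]
current = {"num_pauses": 6, "speech_rate": 1.4}
drift = baseline_deviation(history, current)
```

A large positive z-score on pauses and a large negative one on speech rate would flag a shift relative to that individual, even if the raw values would look unremarkable compared to other speakers.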
How Well Do Voice Tests Predict Alzheimer’s?
Being Patient: Can you determine when people will develop Alzheimer’s based on this analysis or will that be part of a later study?
Rhoda Au: It’s still part of a later study. We have only done the proof of concept so far. We have almost 9,000 recordings from 2005 until now, and we only took a subset of 200 recordings. It’s a fair amount of work to process these recordings properly for analysis. We have collected some of these recordings longitudinally, and some of those people have progressed to Alzheimer’s disease. Now, we’re in a position where we can go back and look at their earlier recordings to see if we can find some of those signals that would separate who goes on to the path of Alzheimer’s disease versus who doesn’t. That’s the stage we’re at right now. As a researcher, I have to get funding for the research that I do, and we’re seeking those opportunities to look at it longitudinally. I think that’s where we’ll be able to determine whether we can find these cognitive biomarkers in people’s voices.
Can Your Handwriting Also Indicate Alzheimer’s Risk?
Being Patient: You’re also using a digital pen to study whether someone’s writing can predict if they will develop Alzheimer’s. Can you tell us more about that research?
Rhoda Au: We started using the digital pen at Framingham in 2011 with a clock drawing test. This was not something that I developed. These were colleagues at MIT and Lahey [Hospital] who had developed the technology of using the pen and then interpreting the pen strokes that were gathered from this digital pen into derived measures.
In the same way that I talked about looking at the voice, there’s a lot of indices that you can pick up on when you’re tracking that kind of behavior.
For instance, if you think about a clock-drawing test, we can look at how long it takes you to draw the clock face, and at the length of the pause between drawing the clock face and drawing the next element; that’s a decision-making pause. So as you’re constructing this whole clock, we’re able to pick up where you’re making decisions. Those decision-making latencies, as they get longer, may be a reflection that there are changes going on in your underlying ability to do cognitive processing. That would be an example.
We have also been playing around with a Bluetooth pen. We’re working with colleagues at Shanghai University and looking at when the pen is on paper versus off paper, so we can differentiate thinking time from actual drawing time. We can detect things like hovering: are you holding that pen in the air? We can also look at thinking: are you moving that pen around to figure out where to go next? This just gives us further insight.
We like to think about all of this as giving us a little window into the brain as people are actually doing the task itself. So hopefully, 10–20 years from now, these studies will be very successful in determining cognitive decline at a much earlier stage.
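The pen-stroke analysis Au describes can be sketched in simplified form. This is a hypothetical illustration, not the MIT/Lahey-derived measures: each stroke is a pen-down/pen-up pair of timestamps, time on paper counts as drawing, and the gaps between strokes are candidate decision-making latencies. The event format and timings are invented.

```python
# Hypothetical sketch: splitting digital-pen events into drawing time
# vs. thinking time. Each stroke is a (pen_down_sec, pen_up_sec) pair;
# gaps between strokes are candidate decision-making latencies.

def stroke_timing(strokes):
    drawing = sum(up - down for down, up in strokes)   # pen on paper
    latencies = [strokes[i + 1][0] - strokes[i][1]     # pen off paper
                 for i in range(len(strokes) - 1)]
    return {
        "drawing_time": round(drawing, 2),
        "thinking_time": round(sum(latencies), 2),
        "longest_latency": round(max(latencies), 2) if latencies else 0.0,
    }

# Invented clock-drawing session: the face, then a long decision
# pause before the first number, then quicker strokes.
strokes = [(0.0, 2.5),   # draw the clock face
           (4.5, 5.0),   # first number, after a 2.0 s pause
           (5.3, 5.8),
           (6.0, 6.4)]
timing = stroke_timing(strokes)
```

In this invented session, the long 2-second gap after the clock face stands out against the short gaps between later strokes; it is latencies like that one, growing longer over repeated testing, that the research treats as a possible signal of change.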