Career reporter and Being Patient columnist Phil Gutis tries a next-generation cognitive assessment, Cognivue's AI-powered Clarity test. Here's how it compares to the cognitive tests he knows so well.
I first saw the briefcase-sized computer monitor at the Alzheimer’s Association’s conference last summer in San Diego. Tom O’Neill, the president of a company called Cognivue, was at the conference, and he was eager to show me his tool to judge cognitive abilities.
The computer system, called Clarity, is essentially a video game that assesses cognitive abilities. It includes a video monitor and a joystick. The goal is to keep up with computer prompts designed to test a variety of cognitive skills: executive function, language, memory, delayed recall and abstraction. The system also measures reaction time and processing speed.
At the time, I watched O'Neill run through the system, and I was intrigued and, honestly, a bit scared. The 10-minute, self-administered test looked challenging, with the screen moving very quickly between the various instructions and prompts. Unfortunately, we were never able to schedule time at the conference for my own run-through, but we agreed that we would soon connect somewhere on the East Coast.
My Clarity encounter finally occurred a few weeks ago in a suburban hotel conference room near my home in Bucks County, PA. O’Neill flew into Philadelphia to visit a childhood friend who lived nearby, and we coordinated a meeting.
I watched a short introductory video, and then the test began.
More on the test — and my results — in a moment. But first a bit of background.
Ever since my first visit to a trial center almost seven years ago to see whether I qualified for a clinical trial for an Alzheimer’s drug, I have taken endless cognitive tests. I’m also a participant in the Aging Brain Cohort (ABC) Study, a long-term observational program sponsored by the National Institute on Aging — which means more cognitive tests, since data is never shared between clinical studies.
The test I remember most is the one that begins with the tester saying that he or she is going to give us three things to remember. These days, having taken the test so many times, I jump in before they even finish: Apple, table, penny. That always gets a deep sigh from the test administrator and an expression of remorse from me. "Sorry," I say. "It's just that I can't forget those words."
I've asked many friends who are also living with early Alzheimer's, and they too remember apple, table, penny. And we laugh at other parts of the test: identify a series of objects (a comb, a watch, a wallet). Pick up a blank piece of paper that's provided and put it on the floor. Pick it up again, fold it as though you've written a letter, put it in the provided envelope, address the envelope to yourself, and draw a stamp in the corner where the stamp would go.
The first cognitive test I took as part of my Alzheimer's journey was the Repeatable Battery for the Assessment of Neuropsychological Status, or RBANS, which involves lists and stories that the participant is asked to recall. I took it as part of my initial assessment for a clinical trial.
In general, I don't find the tests very challenging. But if I'm being fair, my score on my initial assessment was low enough to allow me to move forward through the screening process, with an MRI or a PET scan, and qualify for the aducanumab (brand name Aduhelm) trial. I also understand that for people further along in the disease's progression, the tests can indeed be challenging.
All of this is to say, I appreciated Cognivue's test. It seemed challenging for people living in the early stages of the disease, and it also takes pains to remain accessible for those further along. In essence, the algorithm judges how you are doing and either speeds up if you are doing well or slows down if you are struggling.
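To make that adaptive idea concrete, here is a toy sketch of the general technique of adjusting prompt pacing to performance. This is purely illustrative; the function, thresholds, and multipliers are my own inventions and do not reflect Cognivue's actual algorithm.

```python
# Purely illustrative sketch of adaptive prompt pacing: nothing here
# reflects Cognivue's proprietary algorithm. All numbers are made up.

def next_prompt_interval(current_interval: float, accuracy: float) -> float:
    """Return the next delay (in seconds) between on-screen prompts.

    accuracy is the fraction of recent prompts answered correctly (0.0-1.0).
    """
    if accuracy >= 0.8:
        # Doing well: speed up by shortening the interval (floor of 0.5 s)
        return max(0.5, current_interval * 0.9)
    if accuracy < 0.5:
        # Struggling: slow down by lengthening the interval (cap of 5.0 s)
        return min(5.0, current_interval * 1.2)
    # In between: keep the current pace
    return current_interval

# A strong round shortens the next interval; a weak round lengthens it.
print(next_prompt_interval(2.0, 0.9))  # faster than 2.0
print(next_prompt_interval(2.0, 0.3))  # slower than 2.0
```

This matches the behavior the article describes, which is why a second, better-performed attempt can feel harder: the pacing tightens as you succeed.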
On my first try, I scored a 52, which O’Neill thought was a bit low. He urged me to try again.
So I tried again, and I walked out of the conference room convinced I had done worse because the second round definitely felt harder. In fact I scored a 62, and O'Neill explained that it felt harder because the algorithm sensed I was doing better and sped up the prompts.
O’Neill explained that test scores will vary slightly from test to test, but typically remain in the same range.
“An example of this would be if someone scores 62 on their first test and 68 on the next,” he said. “That would fall into the 51 to 75 score range and would be interpreted the same.”
On a scale of 0 to 100, Cognivue says that scores in the 75 to 100 range suggest no cognitive impairment. Low cognitive impairment is indicated by scores of 51 to 74, while anything 50 and below is considered moderate to severe cognitive impairment. (O'Neill says he typically scores in the high 80s to low 90s.)
So my scores of 52 and 62 put me in the low-to-middle range (although the 52 was skirting very close to moderate to severe). In contrast, on the Mini-Cog and RBANS, I tend to lose only one or two points, which doesn't feel like a realistic judgment of where I am on the cognitive scale. (One of my testers always wonders why I'm part of the trial.)
Cognivue received FDA clearance for the Clarity test back in 2015, having demonstrated that Clarity was superior in accuracy to a commonly used paper test called the Saint Louis University Mental Status exam (or, strangely enough, SLUMS). The company notes that because the test is self-administered and self-scored, it eliminates the need for additional staff support and removes potential bias and human error.
Cognivue also notes that it continues to test the system as part of clinical trials. Today, Cognivue is part of three studies: BioHermes, FOCUS and NEAR. O’Neill says the company’s investment in clinical studies is designed to confirm earlier study results and expand the use of the Clarity test.
Along with blood tests and other improved diagnostic tests, computerized, algorithm-powered cognitive tests like this one promise easier, more accurate diagnosis for people concerned about memory.
From this layman’s perspective, using tools like Clarity as part of cognitive tests by primary care physicians, clinical trial centers, and neurologists alike will make it easier to diagnose cognitive issues and improve diagnostic certainty: a technological win-win all around.
Phil Gutis is a former New York Times reporter and current Being Patient contributor who was diagnosed with early-onset Alzheimer's. This article is part of his Phil's Journal series, chronicling his experience living with Alzheimer's and his participation in the aducanumab clinical trial.