NeuroLex Laboratories Diagnoses Health Conditions With A Voice Sample

Some illnesses, like strep throat, require only a simple lab test to diagnose. Others, like autoimmune disorders, demand far more in-depth testing and examination. And some, most notably neurological, behavioral/mental and cognitive disorders (think depression, psychosis, Alzheimer’s or Parkinson’s), are much trickier to diagnose, almost impossible to spot early on, and often missed until it’s almost too late.

NeuroLex Laboratories is addressing this problem by analyzing something we all have and use almost every waking hour — our voice. Their software uses a voice analysis test, rather than a blood test, examining both the waveform and the transcribed text of a speech sample to diagnose certain conditions in a few minutes, often before the patient shows major symptoms.

For CEO and founder Jim Schwoebel, a Georgia Tech-trained biomedical engineer and co-founder/partner in Atlanta-based accelerator CyberLaunch, the potential of NeuroLex strikes close to home. He got the idea for the technology after seeing his brother hospitalized for a psychotic episode and eventually diagnosed with schizophrenia. He wondered how conditions like these could be caught earlier, before the patient was past the point of pre-clinical intervention.

The company is currently in the midst of over 20 research trials taking place around the world, supported by 30 fellows recruited to gather a massive dataset on voice diagnosis. Though the Atlanta-based team is now in an accelerator program in New York City, they’ve headed back this week for the Southeastern Medical Device Association’s (SEMDA) annual conference.

Hype talked to Schwoebel about the massive cost and time savings NeuroLex technology could create for patients and providers, his hopes for diagnosing any number of conditions much earlier than currently possible, and their next funding round — they’re looking now!

Describe how the NeuroLex voice test works.

Instead of going to a doctor and giving a blood or urine sample, we take a short speech sample from the patient through a 1-2 minute voice test. We extract features from the waveform (e.g. fundamental frequency) and from the text (e.g. semantic coherence), then run them through custom machine learning models for various diseases (psychosis, depression, Alzheimer’s disease, Parkinson’s disease, and more) to generate reports. In doing so, we can reduce average diagnosis cost from $15K to $10K, cut average diagnosis time from 5 years to 5 minutes, and improve patient outcomes by getting patients to specialists sooner.
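NeuroLex hasn’t published its pipeline, but the two feature families Schwoebel mentions (waveform and text) map onto standard open-source tooling. Here is a minimal Python sketch of what such extraction could look like, assuming librosa for pitch tracking; the embed argument is a hypothetical stand-in for any sentence-embedding function.

```python
import numpy as np
import librosa  # pip install librosa

def acoustic_features(wav_path):
    """Waveform features, e.g. fundamental frequency (f0) statistics."""
    y, sr = librosa.load(wav_path, sr=None)
    # pYIN pitch tracking; unvoiced frames come back as NaN
    f0, _, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )
    f0 = f0[~np.isnan(f0)]
    return {"f0_mean": float(np.mean(f0)), "f0_std": float(np.std(f0))}

def first_order_coherence(sentences, embed):
    """Text feature: mean cosine similarity between adjacent sentence
    embeddings. `embed` is any sentence-to-vector function (hypothetical)."""
    vecs = [np.asarray(embed(s)) for s in sentences]
    sims = [
        float(v @ w / (np.linalg.norm(v) * np.linalg.norm(w)))
        for v, w in zip(vecs, vecs[1:])
    ]
    return float(np.mean(sims))
```

In a real product, features like these would feed per-disease classifiers trained on labeled clinical data; the sketch covers only the feature extraction step.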

What is the market impact?

Most health conditions are treated only after advanced symptoms appear, leading to poor patient outcomes and high costs. For example, a health-conscious, pre-symptomatic schizophrenic patient visits a primary care doctor roughly 11 times and sees 6 specialists before having a psychotic break. After this first episode of psychosis, there is an 80 percent relapse rate within 2 years. There is a pressing need to innovate this workflow to reduce costs and improve patient outcomes.

Our market is very similar to Quest Diagnostics’ model, where primary care practices outsource blood tests and other lab work used to assess patient health, except we are a cloud-based, HIPAA-compliant test. There is over $1.1B in addressable annual recurring revenue across 4 indications (psychosis at $35M, depression at $1.05B, Alzheimer’s at $55M, and Parkinson’s disease at $1M).

The societal impact can also be huge. By year five we could have prevented 20,000 suicides/psychotic events and saved the healthcare system more than $3B.

How did you come up with this idea?

Eight years ago my brother was hospitalized for a psychotic episode. When I listened back to his voicemails, it sounded like you could predict his psychosis simply from voice features. At NeuroLex, we’re passionate about capturing disease symptoms early through voice tests, in the same way blood tests are used today. In doing so, I hope we can head off acute hospitalizations like my brother’s through simple interventions — things like sleeping more or seeing a psychotherapist.

Talk about the team you’ve built to develop this unique technology.

Our CTO, Drew Morris, was the founding CTO of Hummingbird Technologies, and our CFO, Kerry Byler, was a Managing Director with BlueCross BlueShield Venture Partners. James Fairey is our Chief Audio Officer; he’s also currently the Market Production Director for CBS Media Atlanta. We also have team members with backgrounds in the healthcare business, deep learning, and machine intelligence.

We’ve also recruited over 30 research fellows from across the world.

Funding or bootstrapped?

We recently raised $200K from Voicecamp, an NYC-based accelerator for companies building voice products. We’re in the accelerator program now.

What is your revenue model?

We will go to market on a consumer monthly subscription model with a freemium-to-premium upgrade (similar to genetic testing company 23andMe’s model). We could also use a direct payment model like Quest Diagnostics’, with patients paying $3-5 per speech biomarker in primary care.

Who are your competitors and why do you stand out?

We have some competitors — Winterlight Laboratories, Sonde Health, and Cogito Health. We are different in that we are focused on building a test for use in primary care, while our competitors are focused more on the consumer side or on a patient-recruitment revenue model as a companion diagnostic for new drugs. For example, Winterlight Laboratories partnered with Eli Lilly to help recruit for an Alzheimer’s study. Our competitors are still research-stage and do not yet have a clear, repeatable revenue stream.

We’re well poised to own the space — we have launched 12 pilots this quarter and have 50 research fellows actively collecting data for us from around the world. Our dataset will be the largest, and that will keep us ahead of our competitors.

You’ve been digging into a lot of research lately. What clinical evidence supports your approach and technology?

We have looked at data from our collaborators all over the world and have seen 100 percent accuracy in detecting psychosis using linguistic features (frequency, use of determiners, maximum phrase length, and first-order semantic coherence). This was in a proof-of-concept study with 35 patients, and it has been followed by similar studies with 100+ patients at various sites through our collaborators.
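Semantic coherence can be computed as sketched earlier; determiner use and phrase length are simple counts over a parse. A rough illustration using the open-source spaCy parser (the underlying study’s exact feature definitions may differ):

```python
import spacy

# Assumes the small English model: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def psychosis_linguistic_features(transcript):
    """Approximate the determiner and phrase-length features described above."""
    doc = nlp(transcript)
    words = [t for t in doc if not t.is_punct and not t.is_space]
    # Determiners are words like 'the', 'a', 'that'
    determiners = sum(1 for t in words if t.pos_ == "DET")
    return {
        "determiner_rate": determiners / max(len(words), 1),
        # Longest noun phrase, in tokens, as a proxy for phrase length
        "max_phrase_length": max((len(c) for c in doc.noun_chunks), default=0),
    }
```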

In looking at depression, a supervised learning support vector machine approach was used to classify patients as suicidal with mental illness, mentally ill but not suicidal, or controls (no mental illness) with 85 percent accuracy. That study enrolled 379 patients, and I think the paper, just published in November of last year, will be a landmark for suicide detection using voice.
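The features and hyperparameters behind that result aren’t given here, but the general shape of a three-class SVM classifier is easy to reproduce with scikit-learn. A sketch with random placeholder data standing in for real voice feature vectors:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Placeholder data: 379 subjects x 40 voice features, with labels
# 0 = control, 1 = mentally ill / not suicidal, 2 = suicidal.
X = rng.normal(size=(379, 40))
y = rng.integers(0, 3, size=379)

# Scale features, then fit an RBF-kernel SVM; SVC handles the
# three classes internally via one-vs-one voting.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```

With random labels this scores near chance (about 33 percent); the reported 85 percent comes from real, informative features.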

For Alzheimer’s disease, the accuracy of audio analyses was statistically significant enough to show their utility in assessment. And finally, the technology was shown to classify Parkinson’s patients versus controls with 75 percent accuracy.

Why is it important to clinically validate this technology?

As with any medical technology, there are risks to patients. A false positive test for depression could lead to unnecessary treatment or unnecessary drugs being prescribed by a physician. Therefore, if we make any diagnostic claim, we must validate that our technology works and attain regulatory clearance for that claim.

Out of the 20 research pilots we are doing this quarter, we are going to choose one indication to go commercial with by Q1 2018. We expect follow-on clearances to be much easier once we have a predicate and understand the FDA’s risk tolerance for our tests. With the recent clearances granted to 23andMe, it looks as if the FDA is taking a friendlier stance on B2C health diagnostics, so we fit well with this shift.

The benefits outweigh the risks in this case. We have no reliable blood or urine biomarker for many neurological or psychiatric diseases, and voice looks like a more reliable biomarker based on the clinical evidence so far. There is much promise in our approach to quantify patient health, reduce costs and improve outcomes.

What are the less-obvious applications of voice diagnosis?

We may find things like brain cancer in primary care through changes in the text or structure of sentences before any other symptoms appear. We might be able to detect autism persisting into adulthood through hyperlexia and odd use of language. We can predict psychotic breaks before they occur through sentence structure. We could potentially find sleep disorders like insomnia or sleep apnea from tone and speaking rate. The list is endless.

We don’t know until we do our research pilots. This is why we want to do so many pilots — this is an unexplored area and we want to be the world leader in pioneering this approach across the greatest number of conditions.

What are your next steps?

We’re going through the Voicecamp accelerator over the next few months in NYC. Meanwhile, many of our pilot studies are underway. Our goal is to get to 20 research pilots before the end of the year and a paid contract with a third-party insurance plan. We were just named a MedTech Innovator Top 100 company, a ranking of medical device, digital health and diagnostic companies, and we’re presenting at SEMDA this week.

As we do this, we’re seeking to raise a $1.5M seed round over the next 6-8 months, so if there are any investors out there who like our approach, we’d love to talk to them.