
Challenges in Measuring Automatic Transcription Accuracy

This post continues our series of articles on Automatic Speech Recognition, the foundational technology that powers Descript’s automatic transcription. The marquee article in this series will test the accuracy rates of today’s biggest ASR vendors — like Google, Amazon, and IBM. Before we publish the results, we wanted to explore the reasons why declaring one ASR provider to rule them all is a bit trickier than it sounds.

Over the last couple of years you may have seen headlines proclaiming that AI-enhanced computers have reached parity with (and even surpassed!) the speech recognition capabilities of humans. It’s a claim that’s both exciting and — given the “creative” interpretations of voice assistants like Siri and Alexa — tough to swallow.

Speech recognition has gotten better, sure. But try using your phone to record a typical, noisy meeting in a boomy conference room—then pass the resulting audio through one of the leading automatic speech recognition engines. You’re liable to wind up with something closer to word salad than meeting minutes.

So what are these researchers on about? To understand why their claims actually have merit — and the associated caveats—we need to explore the industry’s standard accuracy test, Word Error Rate.

How Word Error Rate Works

Measuring transcription accuracy seems like a task that should be reasonably straightforward: you tally how many words the transcription engine gets correct, contrast that with how many it gets wrong — and there you go… Right?

And indeed, that’s essentially how the experts do it. They use fancy math formulas and terms like Word Error Rate (WER) and Levenshtein distance, but conceptually it’s pretty intuitive: words wrong, divided by the number of words that should be there. It’s a linguistic batting average.

At a high level, WER works like this: add up the number of words that the ASR engine got wrong — namely words that have been incorrectly Inserted, Deleted, or Substituted — and divide that by the number of words that should be in the transcript. The resulting percentage is your Word Error Rate.
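
To use purely illustrative numbers: if the reference transcript is 100 words long and the ASR output contains 5 substitutions, 3 deletions, and 2 insertions relative to it, the WER is (5 + 3 + 2) / 100 = 10%.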

Now, in order to discern what the ASR engines are getting right and wrong we need to have an accurate transcript to compare to. These are called reference or ‘ground truth’ transcripts, and they’re hand-transcribed and checked by humans. Each reference transcript is then automatically aligned with its ASR-generated counterpart, so the test can tell which words are supposed to be where. This is important: if the test isn’t using the optimal alignment, it can count what should be a single Substitution error as a pair of Insertion/Deletion errors, inflating the WER.
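
To make the mechanics concrete, here is a minimal sketch of a word-level WER calculation in Python. It isn’t the code any particular vendor or test harness actually uses; it just implements a standard dynamic-programming (Levenshtein) alignment over whitespace-split words, which finds the optimal alignment described above. The function name and the naive whitespace tokenization are our own simplifications.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Return (substitutions + deletions + insertions) / reference word count,
    using a standard word-level Levenshtein (edit distance) alignment."""
    ref = reference.split()   # naive whitespace tokenization; real tests normalize first
    hyp = hypothesis.split()

    # dp[i][j] = minimum number of edits needed to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i                                   # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j                                   # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            if ref[i - 1] == hyp[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]            # match: no cost
            else:
                dp[i][j] = 1 + min(dp[i - 1][j - 1],   # substitution
                                   dp[i - 1][j],       # deletion
                                   dp[i][j - 1])       # insertion
    return dp[len(ref)][len(hyp)] / len(ref)
```

Because the alignment minimizes the total number of edits, a word that is merely swapped gets counted once as a substitution rather than as a deletion plus an insertion.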

You may be wondering how WER handles stylistic differences. For example, some ASR engines will transcribe numbers as words, while others use the corresponding digits (1, 3, 5). And if an ASR engine says “going to” but the source transcript says “gonna” — what then? Such cases are addressed via a normalization process that specifies which contractions are valid, that “Street” and “St.” mean the same thing, and so on.
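
As an illustration only, a simplified normalization pass might look something like the following; the equivalence table here is hypothetical and far smaller than what a real test harness would use.

```python
import string

# Hypothetical equivalence table; real evaluations use large, curated mappings.
EQUIVALENTS = {
    "gonna": "going to",
    "st": "street",
    "1": "one",
    "3": "three",
    "5": "five",
}

def normalize(text: str) -> str:
    """Lowercase, strip surrounding punctuation, and map known variants to a
    canonical form so stylistic differences aren't counted as errors."""
    words = []
    for word in text.lower().split():
        word = word.strip(string.punctuation)   # "St." -> "st", "peach," -> "peach"
        words.append(EQUIVALENTS.get(word, word))
    return " ".join(words)
```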

Issues with WER

The fundamental problem with WER is that every word is worth the same number of points. Whether it’s a name or adjective, “a” or “Antarctica” — they all count the same.

Of course, reality tends to disagree: anyone could tell you that not all words in a sentence are equally important — and that some errors matter more than others. But because these factors depend on context and meaning, it’s difficult to develop a test that can be broadly applied without a litany of caveats.

Which is why you’re reading a litany of caveats.

Along with ignoring the importance of words, WER is also a brutally harsh judge: it gives no partial credit. Even if a mis-transcribed word is just one character off, WER treats it the same as a complete, nonsensical whiff.

Now consider the following two sentences:

  • It’s a matter of free peach.
  • It’s a matter of free.

Using Word Error Rate, these two sentences would receive the same score: it’s just as bad to transcribe “peach” as it is to simply omit the word. To a human, the first sentence is obviously more useful — but WER doesn’t care (granted, if the ASR engine guessed “free lasagna” nobody would be campaigning for partial credit).
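
(For the record, assuming the intended reference is “It’s a matter of free speech”, that’s six reference words with exactly one error in each hypothesis: a substitution in the first, a deletion in the second. Both score 1/6, or roughly 17%.)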

Another issue with WER is its total disregard for speaker labels and punctuation. These may or may not be important, depending on your use-case—but it’s obviously a major simplification.

It’s also worth considering what we even mean by “accuracy” in this context. A 100%-verbatim transcript is likely to include many words that are essentially meaningless: “uhms”, “uhs”, false starts, and duplicates — words that can actually interfere with reading comprehension. We can tweak the test to account for some of this, but it’s a good reminder that WER is just a proxy for evaluating how transcripts will be used in the real world.
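
If the evaluation is meant to reflect readability rather than verbatim fidelity, one simple (and entirely illustrative) tweak is to strip an agreed-upon list of filler words from both transcripts before scoring:

```python
# Illustrative filler list; a real evaluation would define this set more carefully.
FILLERS = {"um", "uh", "uhm", "er", "ah"}

def strip_fillers(text: str) -> str:
    """Drop filler words so a verbatim 'um' neither earns nor costs credit."""
    return " ".join(word for word in text.split() if word.lower() not in FILLERS)
```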

Better than the Rest

Despite these compromises, Word Error Rate is the most widely-used measure of transcription accuracy by a long shot, and it’s what we use for our testing. While imperfect, its prevalence and endurance in the field attest to its utility all the same.

There’s also a body of evidence that shows that WER correlates with other measures of accuracy that the test itself doesn’t take into account, like Keyword Error Rate — which weights each word depending on its likely importance (and is vastly more complex to calculate). After conducting an experiment comparing the two metrics, researchers concluded “the use of Word Error Rate is sufficient especially for cases where WER remains below 25%.”

Even WER’s critics begrudgingly admit its supremacy. In a research paper asking Does WER Really Predict Performance? — which is generally fairly critical of WER — the authors state the following:

“The purpose of this paper is not to postulate a better alternative to WER for evaluating transcript quality; we stipulate that no better alternative likely exists if the task at hand is taken to be speech transcription for its own sake.”

WE’Re Winning!

In recent years, researchers from Baidu, IBM, Microsoft, and Google (among others) have been sprinting toward wringing ever-lower Word Error Rates from their speech recognition engines — with remarkable results.

Spurred by advances involving neural networks and deep learning, along with massive datasets compiled by these tech giants, WERs have improved enough to generate headlines about meeting and surpassing human performance, based on findings that professional human transcriptionists have a WER of around 5.1–5.9% (people mishear things a lot!).

By comparison, Microsoft researchers report that their ASR engine has a WER of 5.1%; IBM Watson’s is 5.5%. And Google claims an error rate of just 4.9%.

[Chart: reported WERs, based on published research papers]

The catch is that most of these tests were conducted using the same set of audio recordings: namely a corpus called Switchboard, which consists of a large number of recorded phone conversations spanning a broad array of topics. Switchboard has been used in the field for many years and is nearly ubiquitous in the current literature—so it’s a reasonable choice. By testing against the same audio corpus, researchers can make apples-to-apples comparisons between themselves and competitors. (Google is the exception; it uses its own, internal test corpus, which is opaque to outsiders).

But this homogeneity leads to a sort of tunnel vision: those claims of surpassing human transcriptionists are based on a very specific kind of audio. If the footage you’re working with doesn’t involve phone calls — then which system is best? Audio is not one-size-fits-all: depending on whether footage has been recorded via a phone or professional mic, from two inches or twenty feet away, with or without accents, featuring two people or twelve — there are a lot of variables, and they can have a substantial impact on transcription accuracy.

That’s one reason Descript decided to run its own tests: we deal with so many different kinds of audio, it makes sense to test with a broader sample, and to get a sense for whether different ASR providers excel at different things.
