What Is an Annotated Bibliography? | Examples & Format
Published on March 9, 2021 by Jack Caulfield. Revised on August 23, 2022.
An annotated bibliography is a list of source references that includes a short descriptive text (an annotation) for each source. It may be assigned as part of the research process for a paper, or as an individual assignment to gather and read relevant sources on a topic.
Scribbr’s free Citation Generator allows you to easily create and manage your annotated bibliography in APA or MLA style. To generate a perfectly formatted annotated bibliography, select the source type, fill out the relevant fields, and add your annotation.
Table of contents
- Annotated bibliography format: APA, MLA, Chicago
- How to write an annotated bibliography
- Descriptive annotation example
- Evaluative annotation example
- Reflective annotation example
- Finding sources for your annotated bibliography
- Frequently asked questions about annotated bibliographies
Annotated bibliography format: APA, MLA, Chicago
Make sure your annotated bibliography is formatted according to the guidelines of the style guide you’re working with. Three common styles are covered below:
APA style
In APA Style, both the reference entry and the annotation should be double-spaced and left-aligned.
The reference entry itself should have a hanging indent. The annotation follows on the next line, and the whole annotation should be indented to match the hanging indent. The first line of any additional paragraphs should be indented an additional time.

MLA style
In an MLA style annotated bibliography, the Works Cited entry and the annotation are both double-spaced and left-aligned.
The Works Cited entry has a hanging indent. The annotation itself is indented 1 inch (twice as far as the hanging indent). If the annotation has two or more paragraphs, the first line of each paragraph is indented an additional half-inch; a single-paragraph annotation gets no extra indent.

Chicago style
In a Chicago style annotated bibliography, the bibliography entry itself should be single-spaced and feature a hanging indent.
The annotation should be indented, double-spaced, and left-aligned. The first line of any additional paragraphs should be indented an additional time.

How to write an annotated bibliography
For each source, start by writing (or generating) a full reference entry that gives the author, title, date, and other information. The annotated bibliography format varies based on the citation style you’re using.
The annotations themselves are usually between 50 and 200 words in length, typically formatted as a single paragraph. This can vary depending on the word count of the assignment, the relative length and importance of different sources, and the number of sources you include.
Consider the instructions you’ve been given or consult your instructor to determine what kind of annotations they’re looking for:
- Descriptive annotations: When the assignment is just about gathering and summarizing information, focus on the key arguments and methods of each source.
- Evaluative annotations: When the assignment is about evaluating the sources, you should also assess the validity and effectiveness of these arguments and methods.
- Reflective annotations: When the assignment is part of a larger research process, you need to consider the relevance and usefulness of the sources to your own research.
These specific terms won’t necessarily be used. The important thing is to understand the purpose of your assignment and pick the approach that matches it best. Examples of the different styles of annotation are shown below.
Descriptive annotation example
A descriptive annotation summarizes the approach and arguments of a source in an objective way, without attempting to assess their validity.
In this way, it resembles an abstract, but you should never just copy text from a source’s abstract, as this would be considered plagiarism. You’ll naturally cover similar ground, but you should also consider whether the abstract omits any important points from the full text.
The example below describes an article about the relationship between business regulations and CO₂ emissions.
Rieger, A. (2019). Doing business and increasing emissions? An exploratory analysis of the impact of business regulation on CO₂ emissions. Human Ecology Review, 25(1), 69–86. https://www.jstor.org/stable/26964340
Evaluative annotation example
An evaluative annotation also describes the content of a source, but it goes on to evaluate elements like the validity of the source’s arguments and the appropriateness of its methods.
For example, the following annotation describes, and evaluates the effectiveness of, a book about the history of Western philosophy.
Kenny, A. (2010). A new history of Western philosophy: In four parts. Oxford University Press.
Reflective annotation example
A reflective annotation is similar to an evaluative one, but it focuses on the source’s usefulness or relevance to your own research.
Reflective annotations are often required when the point is to gather sources for a future research project, or to assess how they were used in a project you already completed.
The annotation below assesses the usefulness of a particular article for the author’s own research in the field of media studies.
Manovich, L. (2009). The practice of everyday (media) life: From mass consumption to mass cultural production? Critical Inquiry, 35(2), 319–331. https://www.jstor.org/stable/10.1086/596645
Manovich’s article assesses the shift from a consumption-based media culture (in which media content is produced by a small number of professionals and consumed by a mass audience) to a production-based media culture (in which this mass audience is just as active in producing content as in consuming it). He is skeptical of some of the claims made about this cultural shift; specifically, he argues that the shift towards user-made content must be regarded as more reliant upon commercial media production than it is typically acknowledged to be. However, he regards web 2.0 as an exciting ongoing development for art and media production, citing its innovation and unpredictability.
The article is outdated in certain ways (it dates from 2009, before the launch of Instagram, to give just one example). Nevertheless, its critical engagement with the possibilities opened up for media production by the growth of social media is valuable in a general sense, and its conceptualization of these changes frequently applies just as well to more current social media platforms as it does to Myspace. Conceptually, I intend to draw on this article in my own analysis of the social dynamics of Twitter and Instagram.
Finding sources for your annotated bibliography
Before you can write your annotations, you’ll need to find sources. If the annotated bibliography is part of the research process for a paper, your sources will be those you consult and cite as you prepare the paper. Otherwise, your assignment and your choice of topic will guide you in what kind of sources to look for.
Make sure that you’ve clearly defined your topic, and then consider what keywords are relevant to it, including variants of the terms. Use these keywords to search databases (e.g., Google Scholar), using Boolean operators (e.g., “media production” AND (“social media” OR “web 2.0”)) to refine your search.
Sources can include journal articles, books, and other source types, depending on the scope of the assignment. Read the abstracts or blurbs of the sources you find to see whether they’re relevant, and try exploring their bibliographies to discover more. If a particular source keeps showing up, it’s probably important.
Once you’ve selected an appropriate range of sources, read through them, taking notes that you can use to build up your annotations. You may even prefer to write your annotations as you go, while each source is fresh in your mind.
Frequently asked questions about annotated bibliographies
An annotated bibliography is an assignment where you collect sources on a specific topic and write an annotation for each source. An annotation is a short text that describes and sometimes evaluates the source.
Any credible sources on your topic can be included in an annotated bibliography. The exact sources you cover will vary depending on the assignment, but you should usually focus on collecting journal articles and scholarly books. When in doubt, utilize the CRAAP test!
Each annotation in an annotated bibliography is usually between 50 and 200 words long. Longer annotations may be divided into paragraphs.
The content of the annotation varies according to your assignment. An annotation can be descriptive, meaning it just describes the source objectively; evaluative, meaning it assesses its usefulness; or reflective, meaning it explains how the source will be used in your own research.
A source annotation in an annotated bibliography fulfills a similar purpose to an abstract: they’re both intended to summarize the approach and key points of a source.
However, an annotation may also evaluate the source, discussing the validity and effectiveness of its arguments. Even if your annotation is purely descriptive, you may have a different perspective on the source from the author and highlight different key points.
You should never just copy text from the abstract for your annotation, as doing so constitutes plagiarism.
Annotating mockups & wireframes for accessibility
Why annotate?
The university is required to produce IT that everyone in our community can use equally. By “shifting left” and ensuring that we take accessibility into account at the design stage of web and app projects, we facilitate compliance at the development stage. This results in a process that is:
Cheaper. It is less resource-intensive to flag possible accessibility issues at the design stage than to have them crop up during development or, even worse, after release.
More efficient. Developers will appreciate the specificity of the annotations and be able to produce the IT much faster.
Educational. Accessibility is everyone’s job. UX practitioners should know that their designs have accessibility implications and what these are. Developers need to know how to produce accessible IT. Annotating designs bridges the gap between the two: UX practitioners learn to specify how the design needs to be implemented to be accessible, and, in time, developers learn how to do so even in the absence of the annotations.
What to annotate
Though annotating designs is a very useful practice, a large share of accessibility issues cannot be captured in annotations. Make sure developers know this, and have them incorporate testing as they develop in order to catch issues that have not been accounted for.
Color and contrast
Color should not be used as the sole flag of meaning. Solution: Either change the design or annotate it to direct the developer to add other markers (text, weight, decoration, etc.).
Contrast between foreground and background of text, controls, and graphics should meet specific ratios. If you spot anything that might be a contrast problem, it probably is. Solution: Annotate the design to ensure the developer has the information needed to meet this requirement. WebAIM has a good tool to help you analyze contrast, as well as a primer on the guidelines and the contrast ratios to be achieved. A quick way to script the check yourself is sketched below.
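If you prefer to verify a ratio in code rather than in a web tool, the WCAG 2.x formula is short enough to script. Here is a minimal Python sketch (the function names are ours, not from any standard library):

```python
# Minimal sketch of the WCAG 2.x contrast-ratio calculation for sRGB hex colors.

def _linearize(c: float) -> float:
    """Linearize one sRGB channel (0-1) per the WCAG relative-luminance formula."""
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(hex_color: str) -> float:
    """Relative luminance of a color given as '#RRGGBB'."""
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4))
    return 0.2126 * _linearize(r) + 0.7152 * _linearize(g) + 0.0722 * _linearize(b)

def contrast_ratio(fg: str, bg: str) -> float:
    """(L1 + 0.05) / (L2 + 0.05), where L1 is the lighter of the two luminances."""
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# WCAG AA: normal-size text needs at least 4.5:1; large text and UI graphics need 3:1.
print(f"{contrast_ratio('#767676', '#FFFFFF'):.2f}")  # ~4.54, passes AA for body text
```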
Structure and meaning
To the naked eye, the design pretty much lays out the structure of the view. But we are also required to offer this structure to someone who cannot see and who is using screen reader software to navigate and read the IT. There are two complementary methods of achieving this:
Follow proper heading structure. The view needs to have a visual title. Depending on the visual structure, other subtitles may be necessary. In your annotations, the main title should be an <h1>; subsequent titles will be <h2>, <h3>, ... <h6>. There should be no heading jumps or gaps. Solution: specify the heading levels in your design annotations (a quick automated check is sketched after this list).
Label the landmark regions of a view. Mentally slice and dice the view into semantic chunks by the intended function of each chunk. These could be something like branding, menu, footer, main content, sidebar, subsection, etc. All of these can be expressed semantically in the markup, and it is your charge to help the developer do so.
Solution: visually annotate the regions' boundaries and provide the correct label for each.
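If you also review rendered pages, heading jumps are easy to catch automatically. Below is a small Python sketch using BeautifulSoup (our choice of parser, not something this guide prescribes):

```python
# Hedged sketch: flag heading-level jumps (e.g., h1 -> h3) in rendered HTML.
from typing import List
from bs4 import BeautifulSoup

def heading_jumps(html: str) -> List[str]:
    """Return a list of heading-structure problems found in the page."""
    soup = BeautifulSoup(html, "html.parser")
    levels = [int(h.name[1]) for h in soup.find_all(["h1", "h2", "h3", "h4", "h5", "h6"])]
    problems = []
    if levels and levels[0] != 1:
        problems.append(f"first heading is h{levels[0]}, expected h1")
    for prev, cur in zip(levels, levels[1:]):
        if cur > prev + 1:  # a jump skips at least one level
            problems.append(f"jump from h{prev} to h{cur}")
    return problems

print(heading_jumps("<h1>Title</h1><h3>Oops</h3>"))  # ['jump from h1 to h3']
```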
Image descriptions
Images must have appropriate alternative text. If the image is meaningful, annotate it with alt="the meaning of the image"; if it is not meaningful, with alt="". Solution: Provide in the design annotations the specific alternative text (e.g., alt="Universal design symbol") for each image.
Link text must meaningfully describe the destination. All links need to clearly and textually describe their function and also be unique within the view.
UI control text
User interface controls must also be labeled with text that meaningfully describes the function and is unique within the view.
Other semantic considerations
In general, all chunks that you can read visual meaning into need to have that meaning expressed in the markup; thus it is important to remind the developer of this. Here are some examples:
Text blocks:
If it looks like a paragraph, it needs to be coded as one (<p>)
If it looks like a list, it needs to be coded as one (<ul> for bullets or <ol> for numbers). Note: some things do not really look like a list but should be one; any sequence can be construed as a list, even if it is horizontal.
Annotate tables to ensure the developer uses table semantics:
A column heading row: <tr><th scope="col">Header label</th> ... </tr>
A row heading on the cell that could serve as the title of the row: <th scope="row">
If there are a lot of tables in the same view, ensure that they have names by annotating with <caption>Name of the table</caption>
Forms
There is one rule: if an element is in a form, it needs to be a labeled form element, programmatically associated with a form element or with the form itself. This applies to:
Form element labels: <label for="id of form element">
Informational text
Global instructions: either outside of the form or associated with it via <form aria-describedby="id of global message container">
Form element instructions: <input aria-describedby="id of information container">
Error messages associated with form elements: <input aria-describedby="id of error message container">
Form group titles: in complex forms, group like form elements with a title: <fieldset><legend>Title of group</legend> … form elements (and their labels) … </fieldset>. This is required for radio-button groups, regardless of the complexity of the form.
Finally, if the design visually omits a form element label, annotate the form element with aria-label="name of this input"
Form element attributes:
Is the form element required? <input required>
Is it asking for something the user has filled out already elsewhere? <input autocomplete="on" name="type of information needed"> (see the complete list of possible values)
Is the input of a specific type? Is it asking for text, a number, a telephone number, an email address, etc.? Annotate with the corresponding type (full list in the Mozilla article about input types): <input type="email">, <input type="tel">, <input type="url">
Dynamic content
Some web pages will respond to user actions by changing the page without loading a new one. Users of screen readers will not perceive the change. Some strategies:
Add role="alert" to containers that appear, to alert the user to them
Add aria-live="polite" to containers that change
Annotate with "pass focus to new input" if a user action has added a new form element
Evolve your annotation
Annotating designs for accessibility involves a bit of informed guesswork and learning on the job. The more you do it, the more secure you will be in your assumptions and the more on-target your recommendations will be.
We have barely scratched the surface of the things that can be flagged in annotations to help developers do the right thing. The Figma Annotation plugin is a good learning tool, even if you do not use Figma.
If you are involved in reviewing the IT as it is being produced, a quick non-technical heuristic review will unearth many barriers, and these tend to be exactly the kinds of barriers that are difficult to annotate against.
See these references for more discussion:
- Annotating designs for Accessibility (video) / Claire Webber and Sarah Pulis
- Top 5 Most Common Accessibility Annotations (article) / Deque
What Is an Annotation in Reading, Research, and Linguistics?
An annotation is a note, comment, or concise statement of the key ideas in a text or a portion of a text and is commonly used in reading instruction and in research. In corpus linguistics, an annotation is a coded note or comment that identifies specific linguistic features of a word or sentence.
One of the most common uses of annotations is in essay composition, wherein a student might annotate a larger work he or she is referencing, pulling and compiling a list of quotes to form an argument. Long-form essays and term papers, as a result, often come with an annotated bibliography, which includes a list of references as well as brief summaries of the sources.
There are many ways to annotate a given text, identifying key components of the material by underlining, writing in the margins, listing cause-effect relationships, and noting confusing ideas with question marks beside the statement in the text.
Identifying Key Components of a Text
When conducting research, the process of annotation is almost essential to retaining the knowledge necessary to understand a text's key points and features and can be achieved through a number of means.
Jodi Patrick Holschuh and Lori Price Aultman describe a student's goal for annotating text in "Comprehension Development," wherein the students "are responsible for pulling out not only the main points of the text but also the other key information (e.g., examples and details) that they will need to rehearse for exams."
Holschuh and Aultman go on to describe the many ways a student may isolate key information from a given text, including writing brief summaries in the student's own words, listing out characteristics and cause-and-effect relations in the text, putting key information in graphics and charts, marking possible test questions, and underlining keywords or phrases or putting a question mark next to confusing concepts.
REAP: A Whole-Language Strategy
According to Eanet and Manzo's 1976 "Read-Encode-Annotate-Ponder" (REAP) strategy for teaching students language and reading comprehension, annotation is a vital part of a student's ability to understand any given text comprehensively.
The process involves the following four steps: Read to discern the intent of the text or the writer's message; Encode the message into a form of self-expression, or write it out in the student's own words; Annotate by writing this concept in a note; and Ponder or reflect on the note, either through introspection or by discussing it with peers.
Anthony V. Manzo and Ula Casale Manzo describe the notion in "Content Area Reading: A Heuristic Approach" as "among the earliest strategies developed to stress the use of writing as a means of improving thinking and reading," wherein these annotations "serve as alternative perspectives from which to consider and evaluate information and ideas."
Examples of annotate
- The book's annotated bibliography fills 45 pages.
- You are allowed to bring annotated copies of the novel you have been studying into the exam.
- Any attached documentation should be annotated with explanatory notes for clarification.
- Students arrive at the lecture equipped with printed notes: all they have to do is annotate these printouts.
- He annotates and indexes a page in his notebook.
- Typically I use this program to annotate a document with my own structured content.
- Annotated data has facilitated recent advances in part-of-speech tagging, parsing, and other language processing issues.
English
Etymology
From Latin annotātiōnem, accusative singular of annotātiō (“remark, annotation”), from annotātus, perfect passive participle of annotō (“note down, remark”). Equivalent to annotate + -ion.
Pronunciation
- Rhymes: -eɪʃən
Noun
annotation (countable and uncountable, plural annotations)
- A critical or explanatory commentary or analysis.
- A comment added to a text.
- The process of writing such a comment or commentary.
- (computing) Metadata added to a document or program.
- (genetics) Information relating to the genetic structure of sequences of bases.

Derived terms
- back-annotation
French
From Latin annotātiōnem.
- IPA (key): /a.nɔ.ta.sjɔ̃/
annotation f (plural annotations)
Further reading
- “annotation”, in Trésor de la langue française informatisé [Digitized Treasury of the French Language], 2012.
What does annotate mean?
To annotate is to add notes or comments to a text or something similar to provide explanation or criticism about a particular part of it.
Such notes or comments are called annotations. Annotation can also refer to the act of annotating.
Annotations are often added to scholarly articles or to literary works that are being analyzed. But any text can be annotated. For example, a note that you scribble in the margin of your textbook is an annotation, as is an explanatory comment that you add to a list of tasks at work.
Something that has had such notes added to it can be described with the adjective annotated, as in This is the annotated edition of the book.
Example: I like to annotate the books I’m reading by writing my thoughts in the margins.
Where does annotate come from?
The first records of the word annotate come from the 1700s. ( Annotation is recorded much earlier, in the 1400s.) Annotate derives from the Latin annotātus, which means “noted down” and comes from the Latin verb annotāre. At the root of the word is the Latin nota, which means “mark” and is also the basis of the English word note.
Typically, text is annotated in order to add explanation, criticism, analysis, or historical perspective. The word can be used in more specific ways in different contexts. In an annotated bibliography, each citation is annotated with a summary or other information. In computer programming, strings of code can be annotated with explanatory notes. In genomics, gene sequences can be annotated with interpretations of genes and their possible functions. In all cases, the word refers to adding some kind of extra information to an existing thing.
Did you know ... ?
What are some other forms related to annotate?
- annotation (noun)
- annotated (past tense verb, adjective)
- annotative (adjective)
- annotatory (adjective)
- annotator (noun)
- reannotate (verb)
How is annotate used in real life?
Annotate is most commonly used in the context of academic and literary works.
every time i annotate a book i think about how beautiful the reading experience will be for the next person but nobody ever borrows books from me so 😔 — mahnoor (@mahnewr_) July 26, 2020
When I read books, I analyze them completely. I annotate, I take notes—I try to communicate with the author. Doing this has shown dramatic improvement in retaining the information I read from the books. I also listen to ASMR ambiences, whether a crackling fireplace or a library. — Brice van der Post (@bricevdp) July 26, 2020
Human workers are needed to prepare data for #AI, annotate the datasets used to train AI models, monitor the performance of these models — and correct inaccurate predictions. Namrata Yadav examines. — ORF (@orfonline) July 30, 2020
Try using annotate!
Which of the following things can be annotated?
A. a classic novel
B. a scholarly article
C. a grocery list
D. all of the above
How to use annotate in a sentence
An AI trained to recognize cancer from a slew of medical scans, annotated in yellow marker by a human doctor, could learn to associate “yellow” with “cancer.”
To make any sense of these images, and in turn, what the brain is doing, the parts of neurons have to be annotated in three dimensions, the result of which is a wiring diagram.
This kind of labeling and reconstruction is necessary to make sense of the vast datasets in connectomics, and have traditionally required armies of undergraduate students or citizen scientists to manually annotate all chunks.
Once a video is annotated with a topic, it is associated with IAB’s categories to be monetized.
You should annotate your reports to document these indexing bugs during the month of September through October 14th.
The latest $400 model has a reading light and a touch screen that allows you to annotate while reading.
Madame Beattie threw back her plumed head and laughed, the same laugh she had used to annotate the stories.
He read industriously for some time, occasionally pausing to annotate ; and once or twice he raised his head and listened.
He would annotate three hundred volumes for a page of facts.
To annotate it in detail would be to spoil its completeness.
His curiosity turning to admiration, he began to translate and annotate the most striking treatises that fell into his hands.
- Open Access
- Published: 22 May 2023
DELMEP: a deep learning algorithm for automated annotation of motor evoked potential latencies
- Diego Milardovich (ORCID: 0000-0003-2453-1693) 1,2,3
- Victor H. Souza (ORCID: 0000-0002-0254-4322) 2,3,4
- Ivan Zubarev (ORCID: 0000-0002-1620-8485) 2
- Sergei Tugin (ORCID: 0000-0002-1274-8863) 2,3,5,6
- Jaakko O. Nieminen (ORCID: 0000-0002-7826-3519) 2,3
- Claudia Bigoni (ORCID: 0000-0002-5142-5434) 7,8
- Friedhelm C. Hummel (ORCID: 0000-0002-4746-4633) 7,8,9
- Juuso T. Korhonen (ORCID: 0000-0001-7802-7084) 2
- Dogu B. Aydogan (ORCID: 0000-0002-7840-3294) 2,10
- Pantelis Lioumis (ORCID: 0000-0003-2016-9199) 2,3
- Nima Taherinejad (ORCID: 0000-0002-1295-0332) 11,12
- Tibor Grasser (ORCID: 0000-0001-6536-2238) 1
- Risto J. Ilmoniemi (ORCID: 0000-0002-3340-2618) 2,3
Scientific Reports, volume 13, Article number: 8225 (2023)
- Data processing
- Neuroscience
Abstract
The analysis of motor evoked potentials (MEPs) generated by transcranial magnetic stimulation (TMS) is crucial in research and clinical medical practice. MEPs are characterized by their latency, and the treatment of a single patient may require the characterization of thousands of MEPs. Given the difficulty of developing reliable and accurate algorithms, the assessment of MEPs is currently performed with visual inspection and manual annotation by a medical expert, making it a time-consuming, inaccurate, and error-prone process. In this study, we developed DELMEP, a deep-learning-based algorithm to automate the estimation of MEP latency. Our algorithm resulted in a mean absolute error of about 0.5 ms and an accuracy that was practically independent of the MEP amplitude. The low computational cost of the DELMEP algorithm allows employing it in on-the-fly characterization of MEPs for brain-state-dependent and closed-loop brain stimulation protocols. Moreover, its learning ability makes it a particularly promising option for artificial-intelligence-based personalized clinical applications.
Introduction
The motor evoked potential (MEP) generated by transcranial magnetic stimulation (TMS) is a crucial neurophysiological signal in research and clinical practice. MEP amplitude and latency allow us to quantitatively assess corticospinal excitability. This is necessary to evaluate patients undergoing surgery and to monitor neuromotor diseases, such as the progression of multiple sclerosis 1 and the recovery of stroke patients 2 and of idiopathic generalized epilepsy patients 3, among a wide range of medical applications. MEPs are commonly characterized by their latency, which is defined as the time elapsed between the stimulation and the onset of the MEP (Fig. 1). The MEP latency is usually annotated manually after visual inspection of the electromyography (EMG) recording, making the process time-consuming, operator-dependent, and prone to errors 4, 5. An algorithm to automate the characterization of MEPs would not only save time and reduce human errors, but would also boost the development of brain-state-dependent and closed-loop brain stimulation protocols by allowing accurate real-time MEP assessment 6.

Figure 1: Two different MEP waveforms (red and blue curves) with similar latencies (black vertical line), defined as the time elapsed between the TMS and the beginning of the MEP trace. The epoch starts at the time the TMS pulse is delivered.
Several attempts have been made to develop algorithms to automate MEP latency annotation. These algorithms are based either on absolute hard threshold estimation (AHTE) 7 or on statistical measures 8, 9. A review and comparison of previous methods is presented by Šoda et al. 10, together with their own algorithm, the Squared Hard Threshold Estimator (SHTE). In general, the previously presented algorithms require the user to specify a set of so-called magic numbers. These are hyperparameters with a large impact on the algorithm's performance, which are empirically derived and depend on the user's knowledge and experience 10. On the other hand, Bigoni's method 11, a derivative-based algorithm, does not require the user to specify magic numbers.
Developing an algorithm to automate the annotation of MEPs is not trivial. Even in ideal conditions of high signal-to-noise ratio (SNR), MEPs are highly variable, presenting significant inter- and within-subject amplitude variability 12, 13, 14. Similarly, the MEP latency variability is well known and has been previously documented for neurosurgical patients 12, 15. Under similar circumstances, the amplitudes of two MEPs can differ by up to an order of magnitude, and the MEPs can present totally different shapes, as shown in Fig. 1. Furthermore, low-amplitude MEPs, commonly recorded in inhibitory stimulation paradigms with paired-pulse TMS, have inherently lower SNR than high-amplitude MEPs, thus adding a new layer of complexity. Overall, these factors make it demanding to assess the MEP latency automatically and reliably.
In this context, machine-learning-based algorithms, particularly those employing deep learning techniques, offer a promising approach to provide an accurate and reliable solution. The MEP latency annotation is a pattern recognition problem, where deep learning methods have already demonstrated their potential 16 . In this work, we present our DELMEP algorithm, which relies on deep learning for automated MEP latency annotation. We contend that DELMEP will speed up data analysis procedures and facilitate the development of closed-loop brain stimulation protocols, as well as the development of personalized medical solutions. To the best of our knowledge, this is the first deep learning solution to the problem of automating the MEP latency estimation.
Material and methods
MEP dataset
We utilized a dataset collected from 9 healthy volunteers (3 women and 6 men, mean age: 30 years, range 24–41) for two studies 17, 18, which describe the detailed experimental protocol and stimulation paradigms. Experiments were performed in accordance with the Declaration of Helsinki and approved by the Coordinating Ethics Committee of the Hospital District of Helsinki and Uusimaa. All participants gave written informed consent before their participation.
TMS was applied with a monophasic trapezoidal waveform by our custom-made multi-channel TMS (mTMS) power electronics 19 connected to a 2-coil transducer capable of electronically rotating the peak induced electric field 17. EMG signals were digitized using an eXimia EMG 3.2 system (Nexstim Plc, Finland; sampling frequency 3 kHz; 10–500 Hz band-pass filter). MEPs were collected with single-pulse and paired-pulse paradigms. The paired-pulse stimuli were delivered with interstimulus intervals of 0.5 and 1.5 ms (short-interval intracortical inhibition, low-amplitude MEPs) and 6.0 and 8.0 ms (intracortical facilitation, high-amplitude MEPs). The conditioning stimulus intensity was 80% of the resting motor threshold, and the test stimulus and single-pulse intensities were both 110% of the resting motor threshold. MEPs were recorded from the abductor pollicis brevis, abductor digiti minimi, and first dorsal interosseous muscles.
EMG recordings showing muscle pre-activation or movement artifacts greater than ± 15 µV within 1 s before the TMS pulse were removed from the analysis. The raw MEPs were visually inspected, and the latency was manually annotated by a single expert (doctoral candidate; 7 years of experience) and quality-checked by a second expert (postdoctoral researcher; 10 years of experience) who confirmed the latency annotations. We note that the aforementioned experts are co-authors of this study. However, the dataset was collected and the latencies were annotated for two prior studies 17, 18 conducted before the conceptualization and development of DELMEP. Therefore, the annotations were performed independently of the development of the DELMEP algorithm. We also performed an additional validation on an external MEP dataset annotated by three experts, one of whom is a co-author of the present study. Data preprocessing and annotation were performed with custom-made scripts written in MATLAB R2017a (MathWorks Inc, USA).
A total of 33,060 MEPs were recorded, i.e., 11,020 from each muscle group (abductor pollicis brevis, abductor digiti minimi, and first dorsal interosseous). Of all MEPs, 232 (0.7%) were discarded because of pre-activation and 11,548 (34.9%) were discarded because of noise or no response. Out of the remaining 21,244 MEPs, the validator discarded 4569 (21.5%) and approved 16,675 (78.5%). Therefore, in total, the dataset is composed of 16,675 MEPs and their peak-to-peak amplitudes and latencies.
Latency estimation algorithm
To automate the MEP latency assessment, we developed an algorithm named DELMEP, written in Python 3.8 and available at https://github.com/connect2brain/delmep. The DELMEP pipeline is composed of the following steps (Fig. 2): (1) pre-processing and (2) latency estimation with a neural network. We present the details of each step below.

Figure 2: Workflow for automated assessment of MEP latencies in a possible closed-loop TMS set-up. MEPs are measured with electrodes placed on the target muscle and stored in a 120-dimension vector. The pre-processing is done by trimming, smoothing, centering, and normalizing the MEP. The resulting vector is used as an input to the neural network for the latency estimation. The dashed arrows show how our DELMEP algorithm could be applied to a closed-loop protocol (dashed box), in which the brain stimulation parameters are modified depending on the MEP responses.
Pre-processing (step 1): The pre-processing simplifies the training and use of our neural network. Without the pre-processing, the high variability and noise of the MEPs would require that the neural network “learns” different inputs (MEP traces) corresponding to similar outputs (latencies). Here, the MEPs are represented by a mathematical vector of the raw voltage measurements, significantly reducing the complexity and increasing the speed of deep learning algorithms necessary to process the data. Hence, in this step, the data are (1) trimmed, (2) smoothed, (3) centered, and (4) amplitude normalized. The MEPs are trimmed from 10 to 50 ms after the TMS (120 samples). This is done to reduce and standardize their length, because in the resting condition, the measurements shortly after the TMS and much later than the end of the MEP do not carry relevant information. On the contrary, their inherent noise could pose a problem to the training and use of the neural network, since it would unnecessarily increase the dimensions of the input vectors.
After trimming, the MEPs are smoothed with a moving average filter with a window length of 3 samples, to reduce the high-frequency noise of the recordings 20 . Next, the MEPs are centered by computing their mean value in the first 15 samples (5 ms) and then subtracting it from every sample. This step reduces the impact of low-frequency noise in the measurements, by counteracting the shifting it produces in the mean MEP value. The window length in the smoothing step and the time window in the centering step were tuned employing a grid search algorithm. Lastly, we normalize the MEPs so that their minimum and maximum values correspond to 0 and 1, respectively, to mitigate the effects of the large variations in amplitude. A detailed representation of this preprocessing is illustrated in Fig. 3 , where the changes on the two MEPs presented in Fig. 1 are shown.

Figure 3: (a) Pre-processing of the two MEPs shown in Fig. 1, divided into trimming (1), smoothing (2), centering (3), and normalizing (4). Panels (b) and (c) show the MEPs before and after the pre-processing, respectively. When comparing (b) and (c), note that the different MEPs look similar after the pre-processing, thus facilitating the training and later use of the neural network.
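For illustration, the pre-processing described above fits in a few lines of NumPy. This is a minimal sketch of our reading of the text, not the published implementation (which is available in the GitHub repository linked above):

```python
import numpy as np

FS = 3000  # EMG sampling frequency in Hz (3 kHz, as in the recordings above)

def preprocess_mep(epoch: np.ndarray) -> np.ndarray:
    """Trim, smooth, center, and normalize one MEP epoch that starts at the TMS pulse."""
    # 1) Trim to 10-50 ms after the TMS pulse: 120 samples at 3 kHz.
    mep = epoch[int(0.010 * FS):int(0.050 * FS)].astype(float)
    # 2) Smooth with a 3-sample moving average to reduce high-frequency noise.
    mep = np.convolve(mep, np.ones(3) / 3, mode="same")
    # 3) Center: subtract the mean of the first 15 samples (5 ms) to counteract baseline shift.
    mep -= mep[:15].mean()
    # 4) Normalize so the minimum and maximum correspond to 0 and 1.
    return (mep - mep.min()) / (mep.max() - mep.min())
```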
Deep learning algorithm (step 2): The pre-processed MEPs are used as inputs to the neural network, which produces a latency prediction as its output. This neural network is built as a multi-layer fully connected perceptron layout with two hidden layers of 30 artificial neurons each, and an output layer. We used the rectified linear unit activation function and trained the network with the Adam optimizer 21 (early stopping criteria: 200 epochs; batch size: 32), as implemented in the software package Keras 2.4.3 22 .
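A minimal Keras sketch of such a network is shown below. The layer sizes, activation, optimizer, batch size, and epoch budget follow the text; the loss function is not stated in the paper, so mean squared error is assumed here:

```python
from tensorflow import keras  # the paper used Keras 2.4.3; a compatible API is assumed here

def build_delmep_like_model() -> keras.Model:
    """Two hidden layers of 30 ReLU units and a single linear output (latency in ms)."""
    model = keras.Sequential([
        keras.layers.Dense(30, activation="relu", input_shape=(120,)),  # pre-processed MEP vector
        keras.layers.Dense(30, activation="relu"),
        keras.layers.Dense(1),  # predicted latency
    ])
    # The loss is our assumption; the text specifies only the Adam optimizer.
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model
```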
Data analysis and method validation
From all MEPs, 2113 (13%) are low amplitude (peak-to-peak amplitude ( V PP ) ≤ 100 µV), 2995 (18%) are medium amplitude (100 µV < V PP ≤ 200 µV) and 11,565 (69%) are high amplitude ( V PP > 200 µV). The MEPs were first divided randomly into a training (13,340 MEPs) and testing (3335 MEPs) dataset, with a training/testing ratio of 80/20. We verified the accuracy and repeatability of our method by using it to evaluate the latency of the 13,340 and 3335 MEPs in the training and testing datasets, respectively, and comparing these results with the manual assessment of the expert. The comparison was made by computing the mean absolute error (MAE) between the latencies provided by our method and those provided by the expert. We analyzed the latency prediction error by computing the correlation of the automated latency estimate with the two main MEP features: V PP and the manually annotated latency. We also estimated the computational times for the DELMEP algorithm using a standard desktop computer (CPU Intel Core i7-5650U 2.2 GHz and 8 GB of RAM).
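With those choices (80/20 split, MAE against the expert annotations), the evaluation might look roughly as follows. `X` and `y` are assumed to hold the pre-processed MEPs and the expert latencies, and `build_delmep_like_model` comes from the sketch above:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# X: pre-processed MEPs, shape (16675, 120); y: expert-annotated latencies in ms (assumed arrays).
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=0)
model = build_delmep_like_model()
model.fit(X_train, y_train, batch_size=32, epochs=200, verbose=0)
mae = np.mean(np.abs(model.predict(X_test).ravel() - y_test))  # MAE vs. the expert annotations
print(f"testing MAE: {mae:.2f} ms")
```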
For comparison, we used the following algorithms to estimate the latency of the MEPs in the testing dataset: Signal Hunter 9, AHTE 10, SHTE 10, and Bigoni's method 11. These estimations were then compared to the manual annotations of the expert, and the MAE was computed for every method. Signal Hunter is open-source software for MEP analysis with a latency estimation algorithm based on statistical measures: it performs moving-average filtering on the MEP, differentiates the smoothed signal, calculates the standard deviation (SD), and finds the index value for which the difference between the absolute differentiated MEP value and the SD is largest; thereafter, it estimates the MEP latency by subtracting a user-selected magic number from that index value. We implemented Signal Hunter with a magic number equal to 5, following the author's original implementation 9. The AHTE algorithm performs an absolute value operation on the MEP, finds its maximum amplitude and determines the threshold value (V thr), marks ± 10% around the mean value of the MEP, and finds the index value where the marked line is crossed by the MEP for the first time. The latency estimation is obtained by subtracting a user-selected magic number from that index value. The SHTE algorithm is based on the same principle as the AHTE algorithm, but it works by squaring the MEP coefficients instead of performing an absolute value operation. We implemented the AHTE and SHTE algorithms (V thr = 10% and magic number = 5) as done in 10. Bigoni's method is derivative-based: it reduces the MEP to a window of 10–50 ms after the stimulation, finds the peak and trough of the MEP, performs an absolute value operation, computes the approximate first derivative of the MEP up to the peak, finds the longest vector of consecutive samples having a positive derivative, and estimates the latency as the first sample of this vector. All algorithms were implemented in Python 3.8.
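To make the last description concrete, here is a rough Python paraphrase of Bigoni's method as summarized above. This is our reading of the prose, not the authors' implementation; the 10–50 ms window, the positive-derivative run, and the five-sample discard rule follow the text:

```python
from typing import Optional
import numpy as np

def bigoni_like_latency(epoch: np.ndarray, fs: int = 3000, min_run: int = 5) -> Optional[float]:
    """Derivative-based latency estimate; returns None if the MEP is discarded."""
    mep = np.abs(epoch[int(0.010 * fs):int(0.050 * fs)].astype(float))  # 10-50 ms window
    peak = int(np.argmax(mep))                     # index of the (absolute) MEP peak
    d = np.diff(mep[:peak + 1])                    # approximate first derivative up to the peak
    best_start, best_len, run_start, run_len = 0, 0, 0, 0
    for i, v in enumerate(d):                      # longest run of positive derivatives
        if v > 0:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len > best_len:
                best_start, best_len = run_start, run_len
        else:
            run_len = 0
    if best_len < min_run:
        return None                                # discarded: no long-enough rising segment
    return 10.0 + best_start * 1000.0 / fs         # latency in ms after the TMS pulse
```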
To evaluate the generalizability of our DELMEP algorithm, we performed a cross-validation both within and across subjects. In the within-subject test, the data of each subject were split into 5 folds; 80% of the MEPs were used as the training dataset and the remaining 20% as the validation set, interchangeably. Final results were obtained by computing the average MAE and the SD of the MAE across folds for each subject separately. The inter-subject variability of our DELMEP algorithm was tested using a leave-one-subject-out approach. In this test, the data from all but one subject were used to train the model, and the MAE was estimated on the data from the left-out subject. This was repeated for all subjects, and the MAE was computed in every iteration.
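Both validation schemes can be sketched with scikit-learn's splitters (our tooling assumption; the paper does not name a library). `X_s`/`y_s` are assumed to hold one subject's data, `groups` each MEP's subject ID, and `build_delmep_like_model` comes from the sketch above:

```python
import numpy as np
from sklearn.model_selection import KFold, LeaveOneGroupOut

# Within-subject: 5-fold cross-validation on one subject's MEPs.
fold_maes = []
for tr, te in KFold(n_splits=5, shuffle=True, random_state=0).split(X_s):
    m = build_delmep_like_model()                  # fresh network for every fold
    m.fit(X_s[tr], y_s[tr], batch_size=32, epochs=200, verbose=0)
    fold_maes.append(np.mean(np.abs(m.predict(X_s[te]).ravel() - y_s[te])))
print(f"within-subject MAE: {np.mean(fold_maes):.2f} +/- {np.std(fold_maes):.2f} ms")

# Across subjects: leave-one-subject-out on the pooled data.
subject_maes = []
for tr, te in LeaveOneGroupOut().split(X, y, groups=groups):
    m = build_delmep_like_model()
    m.fit(X[tr], y[tr], batch_size=32, epochs=200, verbose=0)
    subject_maes.append(np.mean(np.abs(m.predict(X[te]).ravel() - y[te])))
```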
An additional validation was performed in which our DELMEP algorithm and Bigoni’s method were used to estimate the latency of the MEPs in an independent dataset, which is composed of 1561 MEPs and described in detail in the study by Bigoni et al. 11 . This dataset was collected from 16 healthy volunteers (eight women and eight men; age: 26.7 ± 2.6 years). The latencies were manually annotated by three different experts (with 0.5, 5 and 14 years of experience) who did not take part in the development of DELMEP. For validating DELMEP, we computed the ground truth (GT) latency as the mean value from the three annotations. About 99% of the MEPs in this dataset have a high amplitude ( V PP > 100 µV), as these MEPs were collected using a single-pulse paradigm, with a test-intensity chosen to produce MEP amplitude of 0.50 mV.
Results
Training the neural network on a dataset of 13,340 MEPs required about 2 min, and pre-processing an MEP trace required 1.2 ms. On average, annotating a pre-processed MEP required 65 µs. The MEP latencies estimated by the DELMEP algorithm and the corresponding MAE for the testing and training datasets are illustrated in Fig. 4. The similarity between the automated DELMEP annotation and the manual expert annotation suggests a successful training process, since the MEPs in the testing dataset were not used to train the neural network.

Figure 4: Automated MEP latency annotations with the proposed DELMEP algorithm in the training dataset (green) and testing dataset (orange). The results are compared to the manually assessed values. The MAE is presented for both datasets.
To provide a practical example of the DELMEP performance, Fig. 5 shows eight MEPs and their corresponding automated and manually annotated latencies. Although such a small sample can only be considered as an illustrative example, it provides a notion of how the proposed algorithm performs when used to replace a human expert in MEP annotations.

Figure 5: Illustrative MEPs from the testing dataset and their corresponding automated (dashed violet vertical line) and manually assessed (purple vertical line) latencies. These MEPs were not used to train the neural network. The similarity between both latencies indicates the successful performance of our DELMEP algorithm.
Figure 6 maps the DELMEP error as a function of the MEP V_PP and the manually annotated latency. The sub-panel shows the correlation between the DELMEP latency estimation error and the MEP V_PP for MEPs with an estimation error of 1 ms or more. Of the 3335 MEPs assessed by DELMEP in the testing dataset, 1895 (57%) had an error below 0.5 ms and 2924 (88%) had an error below 1 ms, making the algorithm useful not only in research but also in clinical practice.

Figure 6. Map of the DELMEP algorithm errors in estimating MEP latencies as a function of the MEP V_PP amplitude and the manually annotated latency. The upper-right panel shows the DELMEP estimation error versus the MEP V_PP amplitude for MEPs with a latency estimation error higher than 1 ms.
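For reference, the proportions quoted above (57% of errors below 0.5 ms, 88% below 1 ms) follow directly from the per-MEP absolute errors; a minimal sketch, with all names our own:

```python
import numpy as np

def error_fractions(predicted_ms, manual_ms, thresholds=(0.5, 1.0)):
    """Fraction of MEPs whose absolute latency error falls below each
    threshold (thresholds and latencies in ms)."""
    errors = np.abs(np.asarray(predicted_ms) - np.asarray(manual_ms))
    return {t: float(np.mean(errors < t)) for t in thresholds}
```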
The results of the validation comparing DELMEP with Signal Hunter, AHTE, SHTE, and Bigoni's method are given in Table 1, where the MAE is reported for the entire testing dataset and separately for high- and low-amplitude MEPs. Note that Bigoni's method discards MEPs when it cannot find a sufficiently long run of samples with positive derivative. The minimum number of samples with positive derivative was set to five in our implementation, following the original authors' implementation 11. As a result, Bigoni's method discarded 483 of the 3335 MEPs in the testing dataset (15%), most of them low-amplitude MEPs. For a direct comparison, only the remaining MEPs were used to compute the MAE of every method. Note, however, that the entire dataset was used for the cross-validation in Table 2.
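One way to implement this common-subset comparison is sketched below. The NaN convention for discarded MEPs and all names are our assumptions, not the published code.

```python
import numpy as np

def mae_on_common_subset(manual, estimates_by_method):
    """MAE of each method, restricted to MEPs that no method discarded.

    `estimates_by_method` maps a method name to an array of latency
    estimates, with np.nan marking MEPs the method discarded (e.g., when
    Bigoni's method finds no sufficiently long positive-derivative run).
    """
    manual = np.asarray(manual, dtype=float)
    kept = np.ones(manual.shape, dtype=bool)
    for est in estimates_by_method.values():
        kept &= ~np.isnan(np.asarray(est, dtype=float))
    return {name: float(np.mean(np.abs(np.asarray(est, dtype=float)[kept]
                                       - manual[kept])))
            for name, est in estimates_by_method.items()}
```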
The MAE from the five-fold cross-validation, using each fold in turn for testing, is reported in Table 2, together with the average MAE across all tests and its SD.
The intra-subject variability was analyzed by performing a five-fold cross-validation on the data of one subject at a time, repeating the process for all subjects. The MAE for each fold of each subject is reported in Table 3, together with the average and SD across all tests. Furthermore, the relation between error and dataset size for every subject is shown in Fig. 7, together with a fitted curve.

Figure 7. Relation between the average MAE obtained in the intra-subject cross-validation tests and the dataset size (number of MEPs) of each subject. Each data point represents a different subject in the dataset.
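A trend curve of this kind can be fitted, for example, with SciPy. The functional form of the curve in Fig. 7 is not stated here, so the power-law model below is only one plausible choice, and the numbers are illustrative placeholders, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, b):
    # Hypothetical learning-curve model: MAE ~ a * n**(-b).
    return a * np.power(n, -b)

# Illustrative placeholder values only (NOT the study's data).
sizes = np.array([400.0, 800.0, 1200.0, 1600.0, 2000.0])  # MEPs per subject
maes = np.array([1.10, 0.90, 0.78, 0.70, 0.65])           # average MAE (ms)

(a, b), _ = curve_fit(power_law, sizes, maes, p0=(10.0, 0.5))
print(f"MAE ~ {a:.2f} * n^(-{b:.2f})")
```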
The inter-subject variability was analyzed by using the data of one subject for testing and the data from the remaining eight subjects for training; the process was repeated for each subject. The MAE for each subject, together with the average and SD across all tests, is reported in Table 4.
Figure 8 shows the results of the additional validation using the dataset from Bigoni et al. 11. The MAE of DELMEP was 0.7 ms, while that of Bigoni's method was 0.4 ms. For DELMEP, the largest errors corresponded to high-latency MEPs (above 28 ms).

Figure 8. Latencies annotated with DELMEP and Bigoni's method versus manually annotated latencies for the MEPs in Bigoni's dataset. The manually annotated latencies are the mean of the annotations of three independent experts.
Our DELMEP algorithm performed better than traditional hard-threshold-based algorithms across different MEP amplitude ranges. The improvement is especially valuable for low-amplitude MEPs, which are commonly recorded at low stimulation intensities, when computing the motor threshold, and in inhibitory paired-pulse paradigms 18, 23, 24. For example, on the low-amplitude MEPs (V_PP ≤ 100 µV) in the testing dataset, DELMEP, Bigoni's method, SHTE, AHTE, and Signal Hunter yielded MAEs of 0.6, 1.0, 7.3, 16.5, and 22.9 ms, respectively, making DELMEP up to about an order of magnitude more accurate than the threshold-based algorithms. On the high-amplitude MEPs (V_PP > 100 µV) in the testing dataset, DELMEP, Bigoni's method, SHTE, AHTE, and Signal Hunter yielded MAEs of 0.5, 0.8, 1.3, 2.8, and 6.3 ms, respectively. This advantage reflects the consistent accuracy of our DELMEP algorithm regardless of MEP amplitude. The higher prediction errors at lower MEP amplitudes can be explained by the inherently lower SNR, which affects methods relying on hard-threshold estimators more strongly 10.
When tested on our larger dataset (Table 1), the automated annotation by DELMEP was on average 0.3 ms more accurate than Bigoni's state-of-the-art derivative-based method. Both DELMEP and Bigoni's method retain their accuracy on low-amplitude MEPs, an important feature that is out of reach for the other tested algorithms. When applied to Bigoni's dataset (Fig. 8), DELMEP showed a slightly larger error than Bigoni's method (0.7 ms versus 0.4 ms). We note that Bigoni's method discarded 61 of the 1561 MEPs in that dataset (approximately 4%), for which it was unable to estimate a latency. Interestingly, the highest MAE of DELMEP corresponded to MEPs with latencies above 28 ms. This is probably because the dataset used to train DELMEP contained latencies mostly below 28 ms (see Fig. 4), which could limit the performance of the method in that range. Nevertheless, DELMEP and Bigoni's method show similar accuracy for general applications and are about an order of magnitude more accurate than traditional hard-threshold algorithms.
From the user's point of view, both DELMEP and Bigoni's algorithms take the MEP trace as input and return the estimated latency as output. The machine learning nature of DELMEP makes it a more complex algorithm than Bigoni's method. However, this is not a disadvantage for the user, since the code made available with this publication is ready to use, and no machine learning experience is required to apply it in research or clinical applications. Re-training DELMEP on a new dataset requires only minor changes to the source code and a few seconds of running time on a regular desktop computer. Both algorithms require minimal human labor and, unless modifications to the code are intended, minimal intervention and technical knowledge.
From a technical point of view, the main difference between DELMEP and Bigoni's method is that the former is a machine learning algorithm, which "learns" to annotate MEPs from a dataset of examples, whereas the latter is a rule-based algorithm, which finds the latency of an MEP by following a fixed set of steps. This makes Bigoni's method simpler and more interpretable than DELMEP. An important advantage of our deep learning approach, however, is the possibility of pre-training and applying the neural networks on application-specific datasets. For instance, separate models can be created for MEPs from the leg, forearm, and hand muscles, which naturally have distinct latencies 25, 26. This approach may therefore provide more accurate automated annotations for a wider range of applications. Deep learning algorithms can also be used in active-learning schemes that continually and automatically improve the accuracy of their annotations 27, 28, 29 by retraining them periodically on data generated during use. This is of special importance for personalized medicine. As shown in Fig. 7, when training and testing on data from a single subject, the latency estimation errors decreased noticeably as the size of the available dataset increased. Thus, the proposed DELMEP algorithm could be trained on the already-annotated MEPs of a particular subject and then used to automate the annotation of that subject's subsequent MEPs, ensuring the best possible accuracy. Rule-based algorithms, with their static set of rules, do not offer this possibility.
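To illustrate the active-learning idea, here is a minimal Keras sketch of periodic retraining on newly annotated MEPs. The network below is a generic placeholder, not DELMEP's published architecture, and all names, shapes, and hyperparameters are our assumptions.

```python
import numpy as np
from tensorflow import keras

def build_latency_regressor(n_input_samples):
    # Generic fully connected regressor used only as a stand-in for the
    # network described in the paper.
    model = keras.Sequential([
        keras.Input(shape=(n_input_samples,)),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(32, activation="relu"),
        keras.layers.Dense(1),  # estimated latency in ms
    ])
    model.compile(optimizer=keras.optimizers.Adam(), loss="mae")
    return model

def active_learning_update(model, new_meps, new_latencies, epochs=20):
    """Fine-tune on MEPs annotated during routine use: the periodic
    retraining step of the active-learning loop described above."""
    model.fit(np.asarray(new_meps, dtype=float),
              np.asarray(new_latencies, dtype=float),
              epochs=epochs, batch_size=32, verbose=0)
    return model
```

In such a scheme, the subject-specific model would start from the pre-trained weights and be updated whenever a new batch of expert-checked annotations becomes available.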
We should note that a deep learning-based algorithm requires a large dataset for training. However, a research lab already performing MEP experiments is likely to have a suitable dataset available, since just a few sessions can produce thousands of MEPs. Data from previous studies are useful even if they were recorded from different muscles and with a different setup (e.g., a different sampling frequency or stimulation paradigm). Moreover, if more MEPs are required, they need not be annotated by the same expert; indeed, DELMEP would benefit from different experts annotating different parts of the dataset, as this would reduce the risk of the algorithm overfitting to the biases of a single expert (e.g., a tendency to under- or overestimate MEP latencies). In this regard, we note that DELMEP was trained on MEPs annotated by a single expert. The MAE increased by 0.20 ms (from 0.50 to 0.70 ms) between testing against the same expert who annotated the training data and testing against a committee of three independent experts on a different dataset. This increase could be partly due to the use of a single expert for the training annotations; nonetheless, the accuracy remains state of the art, and using a single expert greatly simplifies and speeds up the annotation process. For comparison, the MAE of Bigoni's method increased by 0.40 ms (from 0.40 to 0.80 ms) between their own dataset and ours, indicating that a variation of this magnitude is possible even for an algorithm not based on machine learning. As a reference, Bigoni et al. 11 found a difference of about 0.40 ms between the estimates of two experts.
We should also emphasize that the low computational cost of our DELMEP algorithm, roughly 1 ms per MEP on a regular desktop computer, allows it to be used efficiently in real-time closed-loop brain stimulation protocols 30, 31 and combined with multi-coil TMS electronic targeting for fast, automated cortical mappings 32, 33, 34.
Conclusions
We developed a deep learning-based algorithm that annotates MEP latencies automatically, without human expert intervention. The main difference between our algorithm and previously reported solutions is that the deep learning nature of DELMEP allows it to learn and improve from the available data, making it an ideal candidate for personalized clinical applications. The accuracy of our DELMEP algorithm was practically independent of the MEP amplitude, a feature otherwise found only in Bigoni's method; all threshold-based algorithms considered in this study failed this test. We demonstrated that DELMEP achieves high accuracy on two independent datasets. The millisecond-level automated annotation of the proposed DELMEP algorithm opens the possibility of real-time assessment of MEP latencies in closed-loop brain stimulation protocols.
Data availability
The data will be provided upon reasonable request (including, but not limited to, requests for reproducibility and further related analyses). Requests should be sent to the corresponding author, and the data confidentiality requirements of our ethical permission must be strictly followed. The Python implementation of the DELMEP algorithm used in this study is available at https://doi.org/10.5281/zenodo.7920467, and the development repository can be accessed at https://github.com/connect2brain/delmep.
References
Emerson, R. G. Evoked potentials in clinical trials for multiple sclerosis. J. Clin. Neurophysiol. 15, 109–116 (1998).
Macdonell, R. A., Donnan, G. A. & Bladin, P. F. A comparison of somatosensory evoked and motor evoked potentials in stroke. Ann. Neurol. 25, 68–73 (1989).
Chowdhury, F. A. et al. Motor evoked potential polyphasia: A novel endophenotype of idiopathic generalized epilepsy. Neurology 84, 1301–1307 (2015).
Brown, K. E. et al. The reliability of commonly used electrophysiology measures. Brain Stimul. 10, 1102–1111 (2017).
Livingston, S. C. & Ingersoll, C. D. Intra-rater reliability of a transcranial magnetic stimulation technique to obtain motor evoked potentials. Int. J. Neurosci. 118, 239–256 (2008).
Krieg, S. M. et al. Protocol for motor and language mapping by navigated TMS in patients and healthy volunteers; workshop report. Acta Neurochir. 159, 1187–1195 (2017).
Giridharan, S. R. et al. Motometrics: A toolbox for annotation and efficient analysis of motor evoked potentials. Front. Neuroinform. 13, 8. https://doi.org/10.3389/fninf.2019.00008 (2019).
Harquel, S. et al. Cortextool: A toolbox for processing motor cortical excitability measurements by transcranial magnetic stimulation. https://hal.archives-ouvertes.fr/hal-01390016 (2016).
Souza, V. H., Peres, A., Zacharias, L. & Baffa, O. SignalHunter: Software for electrophysiological data analysis and visualization. https://doi.org/10.5281/zenodo.1326308 (2018).
Šoda, J., Vidaković, M. R., Lorincz, J., Jerković, A. & Vujović, I. A novel latency estimation algorithm of motor evoked potential signals. IEEE Access 8, 193356–193374 (2020).
Bigoni, C., Cadic-Melchior, A., Vassiliadis, P., Morishita, T. & Hummel, F. C. An automatized method to determine latencies of motor-evoked potentials under physiological and pathophysiological conditions. J. Neural Eng. 19, 024002 (2022).
Sollmann, N. et al. The variability of motor evoked potential latencies in neurosurgical motor mapping by preoperative navigated transcranial magnetic stimulation. BMC Neurosci. 18, 1. https://doi.org/10.1186/s12868-016-0321-4 (2017).
Kiers, L., Cros, D., Chiappa, K. H. & Fang, J. Variability of motor potentials evoked by transcranial magnetic stimulation. Electroencephalogr. Clin. Neurophysiol. 89, 415–423 (1993).
Wassermann, E. M. Variation in the response to transcranial magnetic brain stimulation in the general population. Clin. Neurophysiol. 113, 1165–1171 (2002).
Picht, T. et al. Assessing the functional status of the motor system in brain tumor patients using transcranial magnetic stimulation. Acta Neurochir. 154, 2075–2081 (2012).
Schmidhuber, J. Deep learning in neural networks: An overview. https://arxiv.org/abs/1404.7828 (2014).
Souza, V. H. et al. TMS with fast and accurate electronic control: Measuring the orientation sensitivity of corticomotor pathways. Brain Stimul. 15, 306–315 (2022).
Souza, V. H. et al. Probing the orientation specificity of excitatory and inhibitory circuitries in the primary motor cortex with multi-channel TMS. bioRxiv 2021, 56 (2021).
Koponen, L. M., Nieminen, J. O. & Ilmoniemi, R. J. Multi-locus transcranial magnetic stimulation—theory and implementation. Brain Stimul. 11, 849–855 (2018).
Makridakis, S. G., Wheelwright, S. C. & Hyndman, R. J. Forecasting: Methods and Applications (Wiley, 1998).
Kingma, D. P. & Ba, J. L. Adam: A method for stochastic optimization. https://arxiv.org/abs/1412.6980 (2015).
Chollet, F. et al. Keras. https://github.com/fchollet/keras (2015).
Ziemann, U., Rothwell, J. C. & Ridding, M. C. Interaction between intracortical inhibition and facilitation in human motor cortex. J. Physiol. 496, 873–881 (1996).
Ilić, T. V. et al. Short-interval paired-pulse inhibition and facilitation of human motor cortex: The dimension of stimulus intensity. J. Physiol. 545, 153–167 (2002).
Wassermann, E. M. et al. The Oxford Handbook of Transcranial Stimulation (Oxford University Press, 2021).
Rossini, P. M. et al. Non-invasive electrical and magnetic stimulation of the brain, spinal cord, roots and peripheral nerves: Basic principles and procedures for routine clinical and research application. An updated report from an I.F.C.N. Committee. Clin. Neurophysiol. 126, 1071–1107 (2015).
Ren, P. et al. A survey of deep active learning. https://arxiv.org/abs/2009.00236 (2020).
Shen, Y. et al. Deep active learning for named entity recognition. https://arxiv.org/abs/1707.05928 (2017).
Zhang, L., Lin, D., Wang, H., Car, R. & E, W. Active learning of uniformly accurate interatomic potentials for materials simulation. Phys. Rev. Mater. 3, 023804 (2019).
Zrenner, B. et al. Brain oscillation-synchronized stimulation of the left dorsolateral prefrontal cortex in depression using real-time EEG-triggered TMS. Brain Stimul. 13, 197–205 (2020).
Zrenner, C., Desideri, D., Belardinelli, P. & Ziemann, U. Real-time EEG-defined excitability states determine efficacy of TMS-induced plasticity in human motor cortex. Brain Stimul. 11, 374–389 (2018).
Tervo, A. E. et al. Automated search of stimulation targets with closed-loop transcranial magnetic stimulation. Neuroimage 220, 117082 (2020).
Tervo, A. E. et al. Closed-loop optimization of transcranial magnetic stimulation with electroencephalography feedback. Brain Stimul. 15, 523–531 (2022).
Nieminen, J. O. et al. Multi-locus transcranial magnetic stimulation system for electronically targeted brain stimulation. Brain Stimul. 15, 116–124 (2022).
Acknowledgements
We acknowledge the computational resources provided by the Aalto Science-IT project.
This research has received funding from the Academy of Finland (Decision Nos. 255347, 265680, 294625, 306845, 348631, and 349985), the Finnish Cultural Foundation, Jane and Aatos Erkko Foundation, Erasmus Mundus SMART2 (No. 552042-EM-1-2014-1-FR-ERA MUNDUSEMA2), the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq; grant number 140787/2014-3), the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (ConnectToBrain, grant agreement No. 810377), Personalized Health and Related Technologies (PHRT #2017-205) of the ETH Domain, Defitech Foundation and the Wyss Center for Bio and Neuroengineering.
Author information
These authors contributed equally: Diego Milardovich and Victor H. Souza.
Authors and Affiliations
Institute for Microelectronics, Technische Universität Wien, Gußhausstraße 27-29/E360, 1040, Vienna, Austria
Diego Milardovich & Tibor Grasser
Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, Espoo, Finland
Diego Milardovich, Victor H. Souza, Ivan Zubarev, Sergei Tugin, Jaakko O. Nieminen, Juuso T. Korhonen, Dogu B. Aydogan, Pantelis Lioumis & Risto J. Ilmoniemi
BioMag Laboratory, HUS Medical Imaging Center, University of Helsinki, Aalto University and Helsinki University Hospital, Helsinki, Finland
Diego Milardovich, Victor H. Souza, Sergei Tugin, Jaakko O. Nieminen, Pantelis Lioumis & Risto J. Ilmoniemi
School of Physiotherapy, Federal University of Juiz de Fora, Juiz de Fora, MG, Brazil
Victor H. Souza
Department of Neurology, Stanford University School of Medicine, Stanford, CA, USA
Sergei Tugin
Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, CA, USA
Defitech Chair of Clinical Neuroengineering, Neuro-X Institute (INX) and Brain Mind Institute (BMI), École Polytechnique Fédérale de Lausanne (EPFL), 1202, Geneva, Switzerland
Claudia Bigoni & Friedhelm C. Hummel
Defitech Chair of Clinical Neuroengineering, Neuro-X Institute (INX) and Brain Mind Institute (BMI), Ecole Polytechnique Fédérale de Lausanne (EPFL Valais), Clinique Romande de Réadaptation, 1951, Sion, Switzerland
Clinical Neuroscience, Geneva University Hospital (HUG), 1205, Geneva, Switzerland
Friedhelm C. Hummel
A. I. Virtanen Institute for Molecular Sciences, University of Eastern Finland, Kuopio, Finland
Dogu B. Aydogan
Institute for Computer Technology, Technische Universität Wien, Vienna, Austria
Nima Taherinejad
Institute of Computer Engineering, Heidelberg University, Heidelberg, Germany
Contributions
D.M.: conceptualization, methodology, investigation, formal analysis, software, resources, visualization, writing—original draft, writing—review & editing. V.H.S.: conceptualization, methodology, investigation, formal analysis, software, resources, visualization, writing—original draft, writing—review & editing. I.Z.: conceptualization, resources, software, methodology, writing—review & editing. S.T.: data collection, resources, writing—review & editing. J.O.N.: resources, writing—review & editing. C.B.: data collection, resources, writing—review & editing. F.C.H.: data collection, resources, writing—review & editing. J.T.K.: resources, conceptualization, methodology, writing—review & editing. D.A.: writing—review & editing. P.L.: data collection, resources, writing—review & editing. N.T.: methodology, formal analysis, software, writing—review & editing. T.G.: writing—review & editing. R.J.I.: conceptualization, writing—review & editing.
Corresponding author
Correspondence to Diego Milardovich.
Ethics declarations
Competing interests.
R.J.I. has been an advisor and is a minority shareholder of Nexstim Plc. J.O.N. and R.J.I. are inventors on patents and patent applications on TMS technology. P.L. has received consulting fees (unrelated to this work) from Nexstim Plc. The other authors declare no conflict of interest.
Additional information
Publisher's note.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .
Cite this article.
Milardovich, D., Souza, V. H., Zubarev, I. et al. DELMEP: a deep learning algorithm for automated annotation of motor evoked potential latencies. Sci. Rep. 13, 8225 (2023). https://doi.org/10.1038/s41598-023-34801-9
Received: 29 August 2022
Accepted: 08 May 2023
Published: 22 May 2023
DOI: https://doi.org/10.1038/s41598-023-34801-9