Martin Boyanov
1 min read · Mar 6, 2019

The point of word embeddings is that they capture semantic similarity. Basing the similarity calculation on spelling alone will fail to do that. Will ‘beautiful’ be near ‘gorgeous’?

You could try training on a language-modeling task in parallel, where you sample misspelled words at training time.
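A minimal sketch of what that sampling step might look like: a noise function that corrupts words with random character-level edits (transpose, delete, duplicate), which you would apply to training sentences before feeding them to the model. The function names and the edit-probability parameter `p` are illustrative choices, not a reference to any particular library.

```python
import random

def misspell(word, p=0.2, rng=None):
    """With probability p, apply one random character-level edit
    (swap adjacent chars, drop a char, or duplicate a char)."""
    rng = rng or random.Random()
    if len(word) < 3 or rng.random() > p:
        return word  # leave short words and most tokens untouched
    i = rng.randrange(len(word) - 1)
    op = rng.choice(["swap", "drop", "dup"])
    if op == "swap":   # transpose two adjacent characters
        return word[:i] + word[i + 1] + word[i] + word[i + 2:]
    if op == "drop":   # delete one character
        return word[:i] + word[i + 1:]
    return word[:i] + word[i] + word[i:]  # duplicate one character

def noisy_sentence(sentence, p=0.2, seed=None):
    """Corrupt each whitespace-separated token independently."""
    rng = random.Random(seed)
    return [misspell(w, p, rng) for w in sentence.split()]
```

During training, the model sees the corrupted tokens but is still asked to predict in the same contexts as the clean ones, nudging misspelled variants toward their correctly spelled neighbors in embedding space.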

