Talk Abstract: Word Embeddings for Natural Language Processing in Python
Word embeddings are a family of Natural Language Processing (NLP) algorithms that map words to vectors in a low-dimensional space. Interest in word embeddings has risen in the past few years, because these techniques have driven important improvements in many NLP applications such as text classification, sentiment analysis, and machine translation.
In this talk we’ll describe the intuitions behind this family of algorithms, explore some of the Python tools that allow us to implement modern NLP applications, and conclude with some practical considerations.
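To make the core idea concrete: once words are mapped to vectors, semantic relatedness can be measured geometrically, e.g. with cosine similarity. The snippet below is a minimal sketch using small, hand-made toy vectors (the values are invented for illustration; real embeddings would come from a library such as gensim and have hundreds of dimensions):

```python
import numpy as np

# Toy word vectors (hypothetical values, for illustration only):
# each word is mapped to a point in a low-dimensional space.
embeddings = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.8, 0.9]),
    "man":   np.array([0.5, 0.1, 0.1]),
    "woman": np.array([0.5, 0.1, 0.9]),
    "apple": np.array([0.0, 0.9, 0.3]),
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: close to 1.0
    means the vectors point in nearly the same direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Semantically related words end up closer together than unrelated ones.
sim_related = cosine_similarity(embeddings["king"], embeddings["queen"])
sim_unrelated = cosine_similarity(embeddings["king"], embeddings["apple"])
print(sim_related > sim_unrelated)
```

With vectors trained on a real corpus, the same similarity computation powers the downstream applications mentioned above, from finding near-synonyms to feeding features into a text classifier.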
Bio: I’m a freelance Data Scientist based in London. Backed by a PhD in Information Retrieval, I specialise in search and text analytics applications, and I’ve enjoyed working on a broad range of information management and data science projects. Active in the PyData community, I help co-organise the PyData London meet-up. I’m the author of “Mastering Social Media Mining with Python”, published by Packt Publishing.