How AI-Model Named Entity Recognition Makes Search More Relevant

Query understanding is a critical step in the journey from the moment a user issues a query to the point search results are shown. A typical search engine involves three phases:

  1. query expansion (in native query language), 
  2. getting results with query criteria,
  3. scoring results.

The precision and recall of the results are largely determined by the query expansion and scoring phases.
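For reference, precision and recall here carry their standard information-retrieval definitions:

```latex
\text{precision} = \frac{|\,\text{relevant} \cap \text{retrieved}\,|}{|\,\text{retrieved}\,|},
\qquad
\text{recall} = \frac{|\,\text{relevant} \cap \text{retrieved}\,|}{|\,\text{relevant}\,|}
```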

Understanding query intent in an eCommerce search system can be quite challenging because the products sold vary widely across stores and verticals. This wide spectrum of the eCommerce domain makes it important for us to consider user actions (clickstream) for a specific query, especially for head queries.

One of the most common ways to understand query intent is Named Entity Recognition (NER, as we call it). Consider the query blue check casual shirt for men; entity recognition for this query would yield blue_color check(s)_pattern casual_style shirt_type for_other men_gender.

[Figure: how AI-model Named Entity Recognition works for search queries in a contextually relevant system]
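Concretely, the tagged query can be represented as phrase/entity pairs. The Python structure below is our illustration of that output, not Unbxd's internal format:

```python
# Illustrative NER output for the example query; the tag names follow the
# example above, but the data structure itself is an assumption.
query = "blue check casual shirt for men"
ner_tags = [
    ("blue", "color"),
    ("check", "pattern"),
    ("casual", "style"),
    ("shirt", "type"),
    ("for", "other"),
    ("men", "gender"),
]
```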

NER enables us to perform contextual query expansion, which can then be fed to our search engine to cull out a more precise and relevant set of results than traditional query expansion. The most common ways of query expansion use generic synonyms, antonyms, and spell checks that apply to all entities/attributes uniformly. A basic disadvantage of this approach is the expansion of ambiguous terms like cap, which is a sleeve type in queries like cap sleeve dress but a product type in queries like cap for full sleeve dress.

Query expansion without NER and entity-specific synonyms can completely change the meaning of the above queries. For example:

cap sleeve dress will become

(cap | cap_synonyms_fashion) (sleeve | sleeve_synonyms) (dress | dress_synonyms).

However, if we were to use NER and entity-specific synonyms, it would become

(cap sleeve | cap_sleeve_sleevetype_synonyms) (dress | dress_product_type_synonyms)

where cap_sleeve_sleevetype_synonyms denotes the synonyms of cap sleeve specifically in the context of sleeve type, and so on.

So without contextual query expansion, we would end up showing results for caps alongside dresses. With NER, however, we would show results for dresses that have cap sleeves, which was the user's intent.
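A minimal sketch of the two expansion strategies, assuming a simple synonym dictionary and a Lucene-style boolean query string (the dictionaries and helper functions below are hypothetical, not Unbxd's implementation):

```python
# Hypothetical synonym dictionaries; a real system would mine these from
# clickstream and catalog data.
GENERIC_SYNONYMS = {"cap": ["hat", "beanie"], "dress": ["gown", "frock"]}
ENTITY_SYNONYMS = {("cap sleeve", "sleeve_type"): ["short sleeve"],
                   ("dress", "product_type"): ["gown", "frock"]}

def expand_naive(query):
    """Term-by-term expansion: every token picks up generic synonyms."""
    clauses = []
    for term in query.split():
        options = [term] + GENERIC_SYNONYMS.get(term, [])
        clauses.append("(" + " | ".join(options) + ")")
    return " ".join(clauses)

def expand_with_ner(entities):
    """Entity-level expansion: synonyms are looked up per (phrase, entity)."""
    clauses = []
    for phrase, entity in entities:
        options = [phrase] + ENTITY_SYNONYMS.get((phrase, entity), [])
        clauses.append("(" + " | ".join(options) + ")")
    return " ".join(clauses)

print(expand_naive("cap sleeve dress"))
# (cap | hat | beanie) (sleeve) (dress | gown | frock)
print(expand_with_ner([("cap sleeve", "sleeve_type"), ("dress", "product_type")]))
# (cap sleeve | short sleeve) (dress | gown | frock)
```

Note how the naive expansion pulls in hat and beanie for cap, which is exactly the ambiguity that entity-level lookup avoids.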

The above-mentioned approach involves two important processes:

  1. Entity Recognition
  2. Query Expansion using entity synonyms

While entity extraction is not an easy problem to solve, query expansion can also be quite complex. To keep things simple, we will not consider dependent entity synonyms, e.g., which product types are relevant when the sleeve type is cap, or vice versa. There are many possible heuristics for this.

We can have two approaches to entity recognition:

  1. Simple Entity recognition models
  2. Joint models for Entity recognition and user intent (product type(s) or category)

Both approaches require the same kind of training data: a user query tagged with all its entities and the user intent. For example, the data for red velvet cake would be:

[Figure: NER tagging for the query red velvet cake]

Needless to say, all the entities have a score associated with them that is derived from the clickstream. This score signifies the importance of each attribute for a query, and ultimately for the whole data set, i.e., the customer catalog.

Training data can be handcrafted or auto-generated; at our scale we prefer auto-generated. The sources we use are:

    1. Catalog data (product data)
    2. Clickstream data (user actions (click/cart/buy) for a specific query and a product)

We combine clickstream and catalog data to compute impression scores for each query, based on which we decide whether the data for that query qualifies for the training set.
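A minimal sketch of such a scoring step, assuming weighted click/cart/buy counts and an illustrative qualification threshold (the weights, threshold, and function names are our assumptions):

```python
# Hypothetical event weights: a purchase is a stronger signal than a click.
EVENT_WEIGHTS = {"click": 1.0, "cart": 3.0, "buy": 5.0}

def impression_score(events):
    """events: list of (event_type, count) pairs for one (query, product)."""
    return sum(EVENT_WEIGHTS[etype] * count for etype, count in events)

def qualifies_for_training(query_events, threshold=50.0):
    """Keep a query only when its aggregated engagement clears the bar."""
    return impression_score(query_events) >= threshold

print(qualifies_for_training([("click", 30), ("cart", 5), ("buy", 2)]))  # True
```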

The above approach is used to generate NER tags for a customer's historical queries; we then generalize this understanding via a machine-learned (ML) model so that, given new queries, the model can predict entities and intent for the various phrases in the query.

NER is a natural language processing problem involving sequence-to-sequence labeling, where the training data takes the following format.

[Figure: sequence-labeling training data format for Named Entity Recognition]

The input sequence is the query terms, and the output is the corresponding entity tags plus the query intent. We learn a model such that, for new input query terms, it outputs the predicted entity tags and the query intent.
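A hypothetical training record in this format might look like the following; the BIO-style tag names and field layout are our assumptions, since the original figure carried the exact format:

```python
# One illustrative training example: query tokens, per-token entity tags,
# and a whole-query intent label (BIO-style tags are an assumption).
example = {
    "tokens": ["blue", "check", "casual", "shirt", "for", "men"],
    "tags":   ["B-color", "B-pattern", "B-style", "B-type", "O", "B-gender"],
    "intent": "shirts",  # the predicted product type / category
}
```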

  1. Conditional Random Fields (CRF): CRFs are a class of statistical modeling methods that take context (neighboring tags) into account when predicting a tag. The features used are mainly the current word, the next/previous words, the labels of the next/previous words, prefixes/suffixes of words, word shapes (digits/alphabets, etc.), and n-gram variations of all of these. While this ensures we consider context to predict more relevant tags, it also needs a sufficiently large training data set to produce good recall.
  2. Recurrent Neural Networks and Convolutional nets based models: We tried a variety of neural network-based models for sequence tagging of search queries. Transition learning can be loosely understood as a state-machine-like approach, in which the input sequence passes through multiple states and a decision is made at each state to generate the label for that state. Training takes considerably longer and performance optimization may require substantial infrastructure, but we were able to achieve state-of-the-art performance (> 99.9% F1 scores) with these models. They include the following (a minimal sketch of a joint tagger appears after this list):
    1. bidirectional LSTM networks (BI-Long Short Term Memory [1])
    2. Bi-LSTM with char CNN
    3. Bi-LSTM networks with a CRF layer (BILSTM-CRF)
    4. Bi-LSTM – CNN – CRF with the intent prediction
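As a minimal sketch of the last variant's core idea, here is a bidirectional LSTM that jointly predicts per-token entity tags and a whole-query intent. The hyperparameters and sizes are illustrative, and the char-CNN and CRF layers are omitted for brevity; this is our simplification, not the production architecture:

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    """Bi-LSTM that jointly predicts per-token entity tags and a query intent."""
    def __init__(self, vocab_size, tag_count, intent_count,
                 embed_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim,
                            bidirectional=True, batch_first=True)
        self.tag_head = nn.Linear(2 * hidden_dim, tag_count)        # per-token tags
        self.intent_head = nn.Linear(2 * hidden_dim, intent_count)  # whole-query intent

    def forward(self, token_ids):
        h, _ = self.lstm(self.embed(token_ids))          # (batch, seq, 2*hidden)
        tag_logits = self.tag_head(h)                    # one distribution per token
        intent_logits = self.intent_head(h.mean(dim=1))  # pooled query representation
        return tag_logits, intent_logits

# Usage with hypothetical sizes and a random 6-token query.
model = BiLSTMTagger(vocab_size=10_000, tag_count=8, intent_count=20)
tags, intent = model(torch.randint(0, 10_000, (1, 6)))
print(tags.shape, intent.shape)  # torch.Size([1, 6, 8]) torch.Size([1, 20])
```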

Each of the above models has its advantages and shortcomings; one model might work well for short queries while others may be better suited to different query types. We therefore use an ensemble approach that considers all the model outputs and chooses the best entities for a given query.

We have built models for two types of data sets:

  1. Domain-specific models: Domain/vertical-specific models are ones for which we have identified a set of common attributes/entities for that vertical, and we try to fit all queries for the vertical to it. This makes it easier for us to enable default relevance for a customer. However, when a customer belongs to a subset of a vertical, or to a combination of verticals, generic vertical-specific models may not perform well.
  2. Customer-specific models: These cover data sets that do not fit a particular vertical or that have uncommon attributes. Challenges in building such models include catalog quality and obtaining training data for uncommon attributes.
    1. Catalog enrichment: Some customer catalogs are not very clean or well structured. This is where we do catalog enrichment, adding attributes that describe the product in a more structured manner. Catalog enrichment is itself a very interesting and challenging problem; once it is done, our NER models start performing well.
    2. Attribute selection for recognition

We have designed our model training pipeline so that automated feedback is captured and tied to the model training frequency. Models are retrained when we have accumulated enough differential clickstream data. Since we use our NER model output to score search results, feedback is captured in the clickstream, which is then used to retrain the models, making the whole cycle a continuous process.
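A toy sketch of such a retraining trigger, assuming we simply compare the clickstream volume accumulated since the last training run against a threshold (both the measure and the threshold are illustrative assumptions):

```python
def should_retrain(events_since_last_training, baseline_events, min_ratio=0.2):
    """Trigger retraining once the new (differential) clickstream volume is a
    meaningful fraction of the data the current model was trained on."""
    return events_since_last_training >= min_ratio * baseline_events

print(should_retrain(events_since_last_training=25_000,
                     baseline_events=100_000))  # True
```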

We keep running exploratory data analysis (EDA) and internal A/B tests to determine the accuracy and performance of our NER models. This exercise also helps us decide the training frequency of these models. We also keep adding models based on the latest advances in natural language processing. There have been significant efforts toward contextual word embeddings (e.g., ELMo, InferSent, and BERT), which help us improve our NER models with each new version pushed to production. If you want to know more about entity extraction and our previous accomplishments, you can read our blog here.
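As a sketch of how contextual embeddings can feed a tagger, here is one way to pull per-token BERT vectors with the Hugging Face transformers library; the model choice and the downstream use are our assumptions, not a description of Unbxd's stack:

```python
# Requires: pip install transformers torch
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

query = "blue check casual shirt for men"
inputs = tokenizer(query, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One contextual vector per (sub-)token; vectors like these can replace
# static embeddings as the input layer of a Bi-LSTM-CRF tagger.
token_embeddings = outputs.last_hidden_state  # shape: (1, seq_len, 768)
print(token_embeddings.shape)
```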
