Thursday, February 29, 2024

Shusen Wang's video tutorial: LastN feature, DIN model, SIM model

LastN features are derived from the last N items a user interacted with. The process is simple: compute the embedding vector of each item, then take the average of the N vectors; the result becomes the user's LastN feature. (An alternative to averaging is attention.)
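The average-pooling step can be sketched as follows. This is a minimal illustration; the function name and the source of `item_embeddings` (the model's item embedding table) are assumptions.

```python
import numpy as np

def lastn_feature(item_embeddings: np.ndarray) -> np.ndarray:
    """item_embeddings: (N, d) array holding the embedding vectors of the
    last N items the user interacted with. Returns the (d,) average-pooled
    vector used as the user's LastN feature."""
    return item_embeddings.mean(axis=0)

# Two toy 2-d item embeddings -> their average becomes the user feature.
print(lastn_feature(np.array([[1.0, 0.0], [0.0, 1.0]])))  # [0.5 0.5]
```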

The DIN model is an alternative to the plain LastN feature: it takes a weighted average of the feature vectors of the LastN items (this is the same as attention). How are the weights obtained? Take the 500 items most relevant to the user and average them to get a query vector. Use that vector to compute a similarity score against each vector in the LastN features, then apply the similarity scores as weights and take the weighted sum to get a new vector.

How to use the DIN model: given an item recommended to the user, calculate its similarity to each of the LastN items, then take the similarity-weighted sum of the LastN vectors. Use the resulting vector as a user feature.
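The step above can be sketched with dot-product similarity and a softmax over the scores. This is a simplified illustration of the attention described here, not the exact network from the DIN paper; the softmax normalization is an assumption.

```python
import numpy as np

def din_user_feature(candidate: np.ndarray, lastn: np.ndarray) -> np.ndarray:
    """candidate: (d,) embedding of the item being recommended.
    lastn: (N, d) embeddings of the user's last N items.
    Weights each LastN vector by its similarity to the candidate,
    then returns the weighted sum as a (d,) user feature."""
    scores = lastn @ candidate                 # (N,) dot-product similarities
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # softmax -> attention weights
    return weights @ lastn                     # similarity-weighted sum
```

Note the cost: one candidate requires N similarity computations, which is why N must stay small.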

The DIN model needs to calculate attention N times, so N cannot be very large. It is therefore limited to capturing a user's short-term interest.

SIM model: improves the DIN model by quickly removing irrelevant items from LastN, avoiding the need to calculate all N attention scores. This allows N to reach a few thousand. Here is how: for a candidate recommended item, find the k most similar items among the N items (call them the TopK), then calculate attention only over those.

   Two steps of the SIM model: 1. search, 2. attention. Hard search: find items by category and keyword. Soft search: take item embeddings as vectors and run a nearest-neighbor search. Measure each item's timestamp relative to now and convert that time gap into a separate feature embedding, then concatenate it with the item feature. The SIM model needs the timestamp feature because it models a user's long-term behavior (while the DIN model covers short-term behavior).
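The two steps might be sketched as below, using hard search by category. Everything here is an assumption for illustration: the log-scale time-gap buckets, the learned `time_emb` table, and zero-padding the query to match the concatenated width.

```python
import numpy as np

def sim_user_feature(cand_vec, cand_cat, lastn_vecs, lastn_cats, lastn_ages,
                     time_emb, k=100):
    """cand_vec: (d,) candidate item embedding; cand_cat: its category.
    lastn_vecs: (N, d); lastn_cats: N categories; lastn_ages: days since
    each interaction. time_emb: (num_buckets, t) time-gap embedding table."""
    # Step 1 (search): hard search -- keep items sharing the candidate's
    # category, truncated to the TopK.
    idx = [i for i, c in enumerate(lastn_cats) if c == cand_cat][:k]
    vecs = lastn_vecs[idx]
    # Bucketize the time gap on a log scale and concatenate its embedding
    # with each item feature (the long-term timestamp feature).
    buckets = np.minimum(np.log2(1 + lastn_ages[idx]).astype(int),
                         len(time_emb) - 1)
    vecs = np.concatenate([vecs, time_emb[buckets]], axis=1)
    # Step 2 (attention): weight the TopK vectors by similarity to the
    # candidate (query padded with zeros over the time dimensions).
    q = np.concatenate([cand_vec, np.zeros(time_emb.shape[1])])
    scores = vecs @ q
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ vecs
```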

During the last step, items are picked from the list greedily, using relevance as the reward minus the similarity to the already-picked items:

MarginalRelevance(i) = a · reward(i) − (1 − a) · max{ sim(i, j) : j already picked }

When the picked set is large, max{ sim(i, j) } approaches 1, and the algorithm fails because the similarity penalty dominates every candidate's score. The solution is not to compare against every item in the picked set, but only against the last few inside a sliding window.
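The greedy selection with a sliding window can be sketched as follows; the function name and the convention of seeding with the highest-reward item are assumptions.

```python
import numpy as np

def mmr_select(reward, sim, a=0.7, n_select=5, window=3):
    """reward: (M,) relevance scores; sim: (M, M) pairwise similarity matrix.
    Greedily picks items by marginal relevance, penalizing similarity only
    against the last `window` picked items."""
    picked = [int(np.argmax(reward))]          # seed with the top-reward item
    candidates = set(range(len(reward))) - set(picked)
    while candidates and len(picked) < n_select:
        recent = picked[-window:]              # sliding window of picks
        best = max(candidates,
                   key=lambda i: a * reward[i]
                       - (1 - a) * max(sim[i, j] for j in recent))
        picked.append(best)
        candidates.remove(best)
    return picked
```

With a = 1 this reduces to sorting by reward; smaller a trades relevance for diversity.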


References:

https://www.youtube.com/watch?v=Stbc9goPKXQ

https://www.youtube.com/watch?v=0hPep80Oy6k

https://www.youtube.com/watch?v=_4J9aF5KR84
