In the world of SEM (search engine marketing), advertisers bid to have their ads placed at the top of a search engine's results page. TripAdvisor, like most companies, is an active player in this space: if someone searches Google, Bing, or any other search engine for something we offer, we want our listing as close as possible to the top of the results page, since that's the part of the page users are most likely to interact with. We pay the search engine only when a user clicks on one of our ads to come to our site. That makes it important for us to monetize this traffic; otherwise, in the long run we'd incur a high cost with little return. A big part of monetizing that traffic is making sure that when users click an ad to come to our site, we land them on the page whose content is most relevant to what they searched for.
For our Experiences business, this means surfacing the right products – tours, tickets, activities, classes, etc. – so that customers can find what they want on our website as quickly as possible. With a rapidly growing inventory of over 200K bookable products across multiple brands – from surf lessons in Hawaii to helicopter tours over the Grand Canyon and everything in between – this is a challenging problem. Here, we provide a closer look at how we’ve improved our SEM landers for one of our Experiences brands, Viator.
The whole idea behind SEM is to pay for real estate at the top of a search results page. Every time a user makes a search on Google (generally this is true for any search engine, but let’s just focus on Google for simplicity), Google runs a real-time auction across all of its advertising partners to determine which ads get shown and in what order – if any ads get shown at all. You can read Google’s description of this process here.
These are not flashy ads; in fact, if you're not looking closely you might not notice that they're ads at all. They mostly look like regular links, although Google puts a small box labeled "Ad" next to most of its SEM links so that you know it's a paid placement (this is not necessarily true for other paid placements at the top of Google's search results pages, but this post focuses on traditional SEM links). When a user clicks one of these links, they are brought to a specific page on our website, a.k.a. the landing page. See two examples below.
The underlying process at play here is that the queries that users type into the search bar get mapped to keywords. Keywords are short phrases that advertisers submit to Google to be used in these auctions. Typically these are phrases that we think people may search for, and that also reflect content we have on our site (across TripAdvisor and Viator, we have several million keywords that we bid on – some very specific and others quite broad). If a user’s search query closely matches one of our keywords and we have a high enough bid, our ad will be shown on the search results page. In the examples above, the ads are outlined in black boxes in the left panels.
Because we create all of our keywords ourselves, our SEM analysts have an idea of which existing page on our site might best capture the content we think a user is looking for when their query matches a given keyword. Historically, that's how we've assigned landing pages to keywords, and the two examples above show landing pages assigned via this somewhat manual process.
Towards Better Landing Pages
While it's completely reasonable to send someone who searches for "things to do in Barcelona" to our Things to Do in Barcelona page, or someone searching for "blue grotto tour capri" to a Blue Grotto Tours page, we wanted to see if we could do a better job surfacing the right content for our SEM users. Maybe the current landing pages have become stale and no longer have quite the right product mix. Or maybe, as our inventory continues to grow rapidly, there are now too many options on these pages and users are suffering from analysis paralysis.
Indeed, putting the right content in front of our users at the right time is a big part of what we do at TripAdvisor. Typically we think about this type of personalization in terms of a user’s browsing history on our site, and how we can leverage that historical pattern of product views, clicks, and purchases to inform which features or products we should serve or suggest to a user in their current session. But here we have another opportunity for personalization – at the keyword level. Could we create pages with a better mix and/or ordering of products such that their content is more in line with what a user coming to us from SEM would expect?
When users come to our site through SEM, we know which keyword's ad they clicked to get here, and we know which pages they interact with and which products they view, click on, and potentially purchase once they're on the site. Because we believe views and clicks might not capture a user's true intent – for example, maybe the existing landing page isn't good anymore and the user has to click around for a while to find what they want – we train our model only on SEM sessions that ended with a purchase. This is a closer representation of the customer's true intent with respect to their original query, although the signal is a bit sparser.

What this gives us is a mapping of keywords to products. We then frame this as a supervised learning problem: given a keyword, predict the most likely products to be purchased, assuming a purchase is going to be made. To that end, we collect several keyword-specific features – things like geo ID, word2vec embeddings of the keyword term, account information, etc. Additionally, since the keyword-to-product mapping forms a bipartite graph, we project that graph onto the keyword space and perform a graph-based clustering, also known as community detection, to create another feature that we call the keyword community. The figure below helps explain how to project a bipartite graph onto one of its node sets.
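The projection step can be sketched in plain Python. Assuming a toy keyword-to-purchased-product mapping (all names hypothetical), two keywords become neighbors in the projected graph when they share at least one product; a connected-components pass then stands in for the clustering step (in practice a proper community-detection algorithm, such as Louvain, would be used on the weighted projection):

```python
from collections import defaultdict

# Toy keyword -> purchased-product mapping (hypothetical data)
keyword_products = {
    "things to do in barcelona": {"sagrada familia tour", "park guell tickets"},
    "barcelona attractions":     {"sagrada familia tour", "tapas walking tour"},
    "blue grotto tour capri":    {"blue grotto boat tour"},
    "capri day trip":            {"blue grotto boat tour", "capri island cruise"},
}

def project_onto_keywords(kw_prod):
    """Project the bipartite keyword-product graph onto the keyword nodes:
    two keywords are connected if they share at least one product."""
    by_product = defaultdict(set)
    for kw, prods in kw_prod.items():
        for p in prods:
            by_product[p].add(kw)
    adj = defaultdict(set)
    for kws in by_product.values():
        for a in kws:
            for b in kws:
                if a != b:
                    adj[a].add(b)
    return adj

def connected_components(nodes, adj):
    """Crude stand-in for community detection: connected components
    of the projected keyword graph."""
    seen, communities = set(), []
    for start in nodes:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        communities.append(comp)
    return communities

adj = project_onto_keywords(keyword_products)
communities = connected_components(keyword_products, adj)
# The two Barcelona keywords land in one community, the two Capri keywords in another.
```

Each resulting community ID can then be attached to its keywords as a categorical feature alongside the geo and embedding features.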
If you imagine the X-nodes above are the keywords and the Y-nodes are products, then we perform clustering on the space (b). (Image credit: Wikipedia).
Using the features described above, we train an extreme classification model to solve this task. We talked about a very similar approach for user-level recommendations in a previous blog post. We largely base this approach on work by Google/YouTube. Although in that work the extreme classification model is used as a candidate generation step, we find that for many of our use cases it works well as a standalone method for recommendation. The specific architecture we used for this problem is given below.
On the left, we have our high-dimensional categorical features, for which we learn embeddings before flattening and concatenating with the lower-dimensional, one-hot-encoded categorical features and continuous features represented on the right side of the diagram. These are then fed to a single, very wide 4096-neuron hidden layer, and then out to a softmax over our product space. The model is implemented using Keras in Python on a GPU machine.
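To make the shape of that forward pass concrete, here is a minimal pure-Python sketch with toy dimensions and random weights standing in for the learned parameters (the real model uses learned embeddings, a 4096-wide layer, and a product space in the hundreds of thousands):

```python
import math
import random

random.seed(0)

EMB_DIM, HIDDEN, N_PRODUCTS = 8, 16, 50  # toy sizes, not the production dimensions

# Learned embedding table for one high-dimensional categorical feature (e.g. geo ID)
geo_embeddings = [[random.gauss(0, 0.1) for _ in range(EMB_DIM)] for _ in range(100)]

def dense(x, n_out, activation=None):
    """A fully connected layer with random (untrained) weights."""
    w = [[random.gauss(0, 0.1) for _ in range(len(x))] for _ in range(n_out)]
    out = [sum(wi * xi for wi, xi in zip(row, x)) for row in w]
    if activation == "relu":
        out = [max(0.0, v) for v in out]
    return out

def softmax(logits):
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

# One keyword example: embedded geo ID concatenated with
# one-hot-encoded and continuous features (hypothetical values)
geo_id = 42
one_hot_and_continuous = [1.0, 0.0, 0.37, 0.12]
features = geo_embeddings[geo_id] + one_hot_and_continuous

hidden = dense(features, HIDDEN, activation="relu")  # single wide hidden layer
probs = softmax(dense(hidden, N_PRODUCTS))           # softmax over the product space
# probs sums to 1; the highest-probability products become the page's recommendations
```

The highest-probability products under the softmax are what get surfaced (and ordered) on the landing page for that keyword.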
What We Found
After seeing very promising offline results, we ran an online A/B test to evaluate performance against the current system. We deployed the test using Google's drafts-and-experiments feature, allowing for a nice clean split between the test and control groups.
Above you can see an example of our control group (left) vs. the results generated by our model (right). In the test group there is a new product in position one, and while the other two products were also on the control group landing page, they appear in different positions now.
Our test ran for just over a month, and we saw a substantial improvement in almost every key engagement and conversion metric. Interestingly, not only did we see improvements in rate metrics, but we also saw a significant increase in overall volume. This is possible because there are two factors at play when Google decides which advertisers win an auction – bid and landing page quality. Our bids remained the same across test and control groups, which means not only did our users think these were better pages, but Google thought that these pages were better, too.
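A simplified illustration of that second factor (this is not Google's actual formula, and the numbers are made up): if ads are ranked by something like bid times landing-page quality, a better page can win more auctions without any change in bid.

```python
# Hypothetical auction: rank ads by bid * quality (a stand-in for Google's
# Ad Rank, which combines bid with quality signals such as landing-page experience)
ads = [
    {"advertiser": "us_control", "bid": 1.50, "quality": 6.0},
    {"advertiser": "us_test",    "bid": 1.50, "quality": 8.0},  # same bid, better page
    {"advertiser": "competitor", "bid": 1.80, "quality": 5.0},
]

for ad in ads:
    ad["ad_rank"] = ad["bid"] * ad["quality"]

ranking = sorted(ads, key=lambda a: a["ad_rank"], reverse=True)
winner = ranking[0]["advertiser"]
# "us_test" wins: the quality improvement lifted its rank at an unchanged bid,
# which is one way better landing pages can increase overall traffic volume.
```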
We introduced here a recommendation model that we’ve used to improve the quality of our landing pages for our Experiences business. The model is currently live and powering SEM landing pages on viator.com, one of our Experiences brands. Because keywords capture relatively static concepts, we only refresh the model monthly to keep up with any shifts in demand that may be caused by seasonality. Training and prediction are executed on our in-house data science platform.
There are also, without a doubt, a number of ways we could try to improve the current model. The most obvious would be to reframe the problem as a pairwise classification problem: give the model a keyword and a product as inputs (along with all associated keyword- and product-specific features) and perform a binary classification to predict whether someone entering on the given keyword is likely to purchase the given product. By leveraging both keyword and product features, this additional information should improve our predictions and allow the model to generalize better. However, this type of approach has inherent challenges, including how to perform negative sampling and how to scale training, given that the size of the training data would increase substantially.
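To see where the data growth comes from in that reframing, here is a small sketch of how pairwise training examples might be constructed with random negative sampling (toy data and function names are hypothetical, and random negatives are only the simplest of several possible sampling strategies):

```python
import random

random.seed(1)

# Observed (keyword, purchased product) pairs: the positives (toy data)
positives = [
    ("things to do in barcelona", "sagrada familia tour"),
    ("blue grotto tour capri", "blue grotto boat tour"),
]
all_products = [
    "sagrada familia tour", "blue grotto boat tour",
    "tapas walking tour", "capri island cruise", "grand canyon helicopter",
]

def make_pairwise_examples(positives, all_products, negatives_per_positive=2):
    """For each observed purchase, emit one positive (keyword, product, 1) row
    and k negative rows sampled from products the keyword never converted on."""
    purchased = {}
    for kw, prod in positives:
        purchased.setdefault(kw, set()).add(prod)
    examples = []
    for kw, prod in positives:
        examples.append((kw, prod, 1))
        candidates = [p for p in all_products if p not in purchased[kw]]
        for neg in random.sample(candidates, negatives_per_positive):
            examples.append((kw, neg, 0))
    return examples

dataset = make_pairwise_examples(positives, all_products)
# Each positive contributes 1 + k rows, so the training set grows by a factor
# of (1 + k) over the original purchase data; with a large product catalog,
# the choice of k and of the sampling distribution becomes a real scaling concern.
```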
Work like this is always a collaborative effort, and while here we focus on the machine learning portion, our SEM analysts, Revenue Management, and Data Science Platform Engineers all play a big part in making this project and projects like this happen.
Andrew Correia is a machine learning manager on TripAdvisor's Experiences and Rentals team, where he works on problems ranging from recommendations and NLP to multi-touch attribution and causal inference/experimental design. He graduated in 2013 with a PhD in Biostatistics from Harvard University. Prior to joining TripAdvisor in 2017, Andrew worked in a few different areas, including policy evaluation, recommender systems, and A/B testing.