TV content recommendation

Role: Mobile design lead
Timeline: 6 months
Launched April 2019


What story could there possibly be? Didn’t you just copy Netflix? It’s easy to see two companies release very similar features and assume that the one that launched second copied the work of the first. In reality, my team and I pushed to understand how to improve our experience for viewers and worked to differentiate our feature from competitors. Hulu launched a new interface and platform to support Live TV in 2017, and my team worked closely with the machine learning team building Hulu’s recommendation engine to understand the factors that went into presenting a piece of content to a viewer. We also worked closely with our internal customer support organization, Viewer Experience, to understand how viewers perceived Hulu’s content suggestions.


Hulu’s interface focused on personalized collections across its apps. One content collection, called “Lineup,” caused frustration among Hulu subscribers because its recommendations felt irrelevant. Hulu tasked my team with improving the recommendations that Lineup and other “For You” collections made, and with improving viewer confidence in the system itself. We formed a cross-functional core team with representatives from engineering, product, design, research, and marketing. We asked: what should Hulu be able to ask a viewer about a suggestion, and what should that viewer be able to tell Hulu about it? Another designer and I developed a broad north-star vision that we would later narrow down to a roadmap of content recommendation features.


My initial partner on this project, Jessica Baluyot, and I began working with the team designing Hulu’s recommendation engine, along with that team’s product manager, to understand how it worked. We learned how traits of TV and movie content related to one another, and what implicit signals the engine took as inputs. From previous viewer interviews and analysis of customer contacts, we learned that viewers did not feel recommendations were accurate and felt powerless to improve them. We identified the need for tools that put viewers in control.

As avid users of other streaming and entertainment apps, Jessica and I conducted a competitive audit of feedback tools across those services and unrelated apps like social media. We wanted to understand the types of feedback consumers already used so the ones we designed would feel intuitive.

During this exploratory phase we developed a series of proposals for our project stakeholders, ranging from rating systems to in-app customization and AI-generated loglines. We partnered with a UX researcher to test some of these concepts with actual viewers and to build a baseline understanding of early iconography, language, and general opportunities to provide feedback in our apps on web, mobile, and living room devices.

In parallel with our exploratory design work, we gathered data from internal analysts on the paths people took to playback and the factors that potentially influenced them. We also needed a qualitative understanding of how people made choices about long-form entertainment. By this time our design team had grown, so we worked together to design a company-wide pop-up with three activities aimed at understanding the decision-making people engage in when choosing content. This pop-up design fair helped us understand the relationship between artwork, genre, and content, as well as the role that high-level groupings like seasonal or thematic collections play in viewer perceptions of content before they watch it.

Using the knowledge gathered from our exploratory activities, we broke the tools for user feedback into three categories: explicit user feedback, recommendation presentation, and preference management.

These core areas would address the pain points viewers felt, give our recommendation engine the fuel it needed to improve its output, and align with our team’s goal of empowering viewers to control their viewing experience.

We held workshops and design sessions to map out our plans. We found that browsing was the most important area for viewers to give feedback.

Working with a copywriter, we tested different strings to find the ones that best communicated what the action would do and the signal it would send to Hulu. We created prototypes in Framer and Sketch to test our concepts with viewers and simulate the response from the recommendation engine.

Four images of a usability test in a lab designed to look like a living room: a mobile phone screen with UI for giving feedback on content recommendations, a participant using the device with a researcher, an alternate view of the scene, and a closeup of the device in the participant’s hands.

In coordination with our UX research team, we conducted usability tests in our Santa Monica research lab with a variety of Hulu and non-Hulu viewers. Based on internal decisions, our feature expanded to appear in more areas of the app.

At the same time, another team launched a lightweight version of our feature set called “stop suggesting this,” which aimed to measure how many viewers wanted to eliminate specific shows from recommendations altogether. The response was wildly impressive, and per-user engagement with the feature decreased over time, indicating that viewers were seeing more relevant content.

Around this time, Hulu saw a turnover of its executive team as well as a looming operational takeover from The Walt Disney Company. Executives scrapped many in-progress projects across the company and limited the scope of others as they pushed new strategic initiatives. Our recommendation project was among those that lost engineering resources. As a result, we pivoted to launch the barest-bones version of what we were trying to build, in order to gather enough real-world evidence to justify future iterations.


We launched Like/Dislike, which expanded the scope of where our feature would appear: instead of appearing only on recommended content, the option to give feedback was available on any piece of media across the app. This produced some surprising results, notably that people gave positive feedback twice as often as negative feedback. I developed an evidence-based pitch for moving beyond the MVP, and we further fleshed out our original designs. I’m excited to see these features launch in the future.

The results:
  • Improvement in the perception of our recommendations
  • Increased engagement when browsing
  • Increase in total minutes watched