Background

What is Actions on Google?

Actions on Google is a development platform for the Google Assistant. It allows third parties to develop "Actions": applets that extend the Assistant's functionality.

Project brief


Redesign and develop the Actions on Google site to clarify the multiple building paths and solutions for developers. The vision is to enable developers to build with confidence, with clear and simple access to the right resources and solutions.

Team: 1 Product Lead, 1 Scrum Master, 2 Product Designers, 1 Developer


Sprint One


Getting Sorted: Auditing, Organizing, Strategizing

• Stakeholder interviews

• Goals and Benchmarks

• Personas

• Audit via user interviews

Stakeholder vision

Taken from interviews

Clarifying entry points
Clearly communicate the variety of ways developers can start building

Developer education
Revamp of developer documentation
Identifying and surfacing necessary information early and upfront

Leveraging Google’s reputation
Google is uniquely positioned to leverage its trusted reputation within the developer community

Encouraging learning and exploration
Adoption will come through compelling use cases

New platform, new technology: understand risk-reward for developers




Defining goals and creating benchmarks for research

Goal 1: Clearly communicate the value of building for the Google Assistant

Benchmark 1: Developers understand what Actions are and show increased interest in building Actions


Goal 2: Increase developer interest and confidence in building Actions

Benchmark 2: Developers are able to articulate reasons why they would build an Action and accurately choose correct paths for building 

Goal 3: Grow a highly-engaged Google Assistant developer community

Benchmark 3: Developers can clearly state the benefits of joining the community

Personas

Based on surveys and preliminary interviews

Takeaway 1

Developers don’t know how to build for the Google Assistant 

1. Lack of understanding about how to build on a new platform leads to lowered confidence
2. Landing pages don't emphasize developers' preferred methods for learning and developing

Takeaway 2

Developers don’t know why to build for the Google Assistant 

No emphasis on addressing key developer motivations, such as:
• Monetization
• Career advancement
• Potential audience
• Community incentives

Takeaway 3

Developers don’t know what to build for the Google Assistant 

1. Addressing lack of inspiration
• Lack of knowledge about what already exists
• Lack of knowledge about what is possible to build
2. Fighting “blank canvas” syndrome

Principles and tactics


Sprint Two


Navigation and systems

• User journeys

• Navigation

• Design + Visual System

• Content & UX Writing 

Prioritizing supporting content

We conducted card sorting with a group of 12 developers, asking them to sort types of content into 3 categories: content that was most important and relevant to them when developing an Action, content that was somewhat important, and content that was not important.
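As an illustration of how results from a sort like this can be rolled up into a content priority order, here is a minimal sketch; the module names and responses are hypothetical, not our actual data.

```python
from collections import Counter

# Hypothetical card-sort data: each participant assigns each content module
# to one of the three buckets. Module names and responses are illustrative only.
responses = [
    {"Code samples": "Most important", "Case studies": "Somewhat important",
     "Community forum": "Not important"},
    {"Code samples": "Most important", "Case studies": "Most important",
     "Community forum": "Somewhat important"},
]

# Tally how often each module landed in each bucket.
tallies = {}
for response in responses:
    for module, bucket in response.items():
        tallies.setdefault(module, Counter())[bucket] += 1

# Rank modules by their "Most important" votes to inform page hierarchy.
ranked = sorted(tallies.items(), key=lambda kv: kv[1]["Most important"], reverse=True)
for module, counts in ranked:
    print(f"{module}: {dict(counts)}")
```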

Navigation

Readjusting the navigation to include the new, additional pages while also prioritizing supporting content (see above).

Goal: leverage the nav to provide additional clarity around the possible solution pathways.

Note: The navigation was solidified during a later phase of user testing, where we A/B tested tabs vs. drop-downs.



UI and component updates


Revisited the design system and updated it to fit the standards of Dev Site 2, the platform for Google's developer consoles.

This includes buttons, grids, typography, cards, modules, interactions, etc.


Visual system

Started establishing a new visual/illustration system and iterated based on stakeholder feedback, ease of scaling, and the need for consistency with assistant.google.com.

Weekly check-ins happened between Sprints 2-5

Fun fact: I actually created the old illustration system in 2018, which was used in the console and in marketing materials for Google I/O.


Sprint Three


Sketching & wireframing

Homepage wireframing
Subpage wireframing
2 Prototypes with different navs
Script writing for user interviews

Wireframing

Leveraging our "principles and tactics" to guide content and hierarchy

We decided to break these key pathways out into subpages in order to satisfy all the personas' use cases and achieve our previously stated goals:

Extend your mobile app
Provide faster ways for users to access your Android app via Assistant.

Build rich and natural conversations
Build custom voice and visual experiences for smart devices.

Enhance your web presence
Present your content in rich ways for Google Search and Assistant.





Sprint Four

Engaging the Assistant developer

User interviews for Assistant developers
Research synthesis 

Testing methods

Comprehension testing
Learning goal: Discover if, after viewing the redesigned page, developers demonstrate understanding of Actions on Google and an increased interest in developing an Action.

Task completion
Learning goal: Discover if, after viewing the redesigned page, users can successfully match their intent to the optimal path.

Card sorting
Learning goal: Discover which pieces of content are most relevant in motivating users to create an Action.

Testing participants

We leveraged our established personas to pick out developer candidates that had similar use cases and goals. Our pool consisted of 12 developers, with 4 users for each persona.

Comprehension testing


Testing
Look at the page for 30 seconds and answer questions: "After seeing this page..."
• Can you describe what Actions on Google is?
• Can you again rank your interest in developing an Action from 1 (lowest) to 5 (highest)?
• What kind of content would you expect to see after clicking on "Learn more"?

Success measure
• Success understanding content
• Matched expectations for CTAs
• Increased interest in developing an Action

Task completion 


Testing
Match a given intent (e.g., "You're a developer working for a food delivery company with an existing app") to the optimal way to build.
• [If relevant] Why did they select that particular way to build for this intent?
• What kind of content do they expect to see after clicking on the CTA for that particular way to build?

Success measure
• Success understanding content types
• Success matching developer intents with the optimal paths

Card sorting


Testing
Participants sorted 25 cards with content modules printed on them into 3 general buckets: Most important, Somewhat important, Not important.
Blank cards were provided to fill out if they felt any content was missing.
• Based on their level of experience with the Assistant (new, returning, veteran), what content did they feel was most important for someone at their level of familiarity?
• Was there any type of content they felt was missing?

Success measure
• Opinion about relevance of content
• Opinion about engagement level of content

Results overview

“Text heavy - too much to read”
“I’m not sure if I’m in the right place for my use case.”
“Hard to differentiate paths quickly through text.”

Identified opportunities
• Proceed with the drop-down (mega menu) instead of tabs
• Leverage visual design to provide differentiation in paths
• Adjust copy to be more concise
• Reorganize page hierarchy based on card sorting



Sprint Five


Iterating and improving

Implement revised content / hierarchy
Plug in finalized assets
Apply design system + updated components

Application of feedback

Based on previous testing we:
• Proceeded with drop-down mega menus
• Leveraged visual design to provide differentiation in paths
• Adjusted copy to be more concise
• Reorganized page hierarchy

Added a page at the top level of the documentation that served as a comprehensive library menu

Sprint Six

Final testing & handoff

Final rounds of usability testing + aggregated findings
Key software and design documentation completed
Completed implementation (dev)

Final testing

We used the same methodology and testing techniques as we did in Sprint 5, but recruited 12 new participants to give the site fresh, unbiased eyes. We tested the webpages in full fidelity, as you will see below.

Results

The site launched in Q4 of 2019; you can visit the live site here. To recap, below is the progressive improvement measured by the System Usability Scale (SUS), from when we were briefed to when we launched.
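For reference, the SUS figures below assume the conventional 10-item questionnaire scored on a 1–5 scale: odd-numbered (positively worded) items contribute their rating minus one, even-numbered (negatively worded) items contribute five minus their rating, and the sum is scaled to 0–100.

\[
\mathrm{SUS} = 2.5 \left[ \sum_{\text{odd } i} (s_i - 1) + \sum_{\text{even } i} (5 - s_i) \right]
\]

where s_i is the 1–5 response to item i.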

Original site

Benchmarking System Usability Scale (SUS): 33% 

Initial audit with 12 developers, without personas (original Actions on Google site).

Developers didn’t know how, why or what to build


Testing Round 1

System Usability Scale (SUS): 62.12%

Key points of friction identified and addressed:

Text heaviness
Unclear choices (“not sure where I’m supposed to go”)
Lack of tangible or concrete examples (“what would building this do?”) 
Lack of patterns for easy path recognition (“Not sure how to get back to that page”)

Testing Round 2

System Usability Scale (SUS): 88.6%

Pathways were clear and developers were able to find their solutions

Interviewees were able to identify the value props of building for the Google Assistant

Interviewees were able to understand how to build

Examples and visuals helped confirm that users were where they wanted to be 
