We’re living in the Wild West of artificial intelligence, with products and companies popping up so quickly that we’ve had little time to understand what it truly means to integrate AI into our everyday lives.
A new initiative at Google, called People + AI Research (PAIR), is the company’s attempt to rein in its horses and focus on distilling user-centric design principles to govern interactions between humans and artificially intelligent systems. How? For Fernanda Viégas and Martin Wattenberg, the two senior staff research scientists on the Google Brain team who are running PAIR, that means refocusing on different types of users, studying how AI can meet their needs, and then publishing design tools and guides for everyone from researchers to casual users.
“Research could start to shed light on what are some of the first principles in user interaction that as technologists we should have in the back of our minds,” Viégas says. “What should be our guide? What should be the way in which we’re thinking about building new applications?”
In that sense, it sounds like PAIR’s ultimate goal is to do for AI what Google’s Material Design guidelines did for user interface design, establishing best practices for designers using AI and framing the company as a leader in human-first AI design.
These questions are particularly important right now because the way that people interact with technology is fundamentally shifting. AI and machine learning are unlike any other technological paradigm in that they can change without their creators’ input. “Usually when we talk about humans interacting with computers, computer programs traditionally tend to be static in a sense,” Viégas says. “You’re given the rules, you know the different behaviors; they don’t necessarily evolve in the same ways that machine learning systems can evolve. That’s part of the opportunity, to understand how do you design for that new kind of interaction.”
Viégas and Wattenberg both have backgrounds in data visualization. Before joining Google, the two led IBM’s Visual Communication Lab and founded a commercial data viz studio called Flowing Media. Despite how distinct data viz and machine learning might seem, the researchers see parallels, which is what drew them to study machine learning in the first place. After all, data viz takes big, complex data sets and distills them for human consumption; machine learning also processes large amounts of data, looking for patterns. Both act as an interface between the data and the user.
There’s also a similarity in how the two technologies have been adopted. “There’s an analogy in my mind with work in data visualization,” Wattenberg says. “We saw the technology go from something that’s fairly esoteric and used by researchers to something that’s part of everyday culture. That’s an inspiration.”
PAIR will focus its research on three main user groups: engineers and machine learning experts, domain experts who might benefit from using AI (like scientists, doctors, or musicians), and everyday people without technical expertise in machine learning. The initiative is guided by a series of core questions that people from across Google are tackling from the perspective of each of these three types of users: How do you design interfaces to make algorithms, which are often “black boxes,” more transparent and understandable? How do you create educational materials that democratize machine learning for a wider audience? What can humans learn from machine learning systems?
One way Google is examining the last question, in this case from the perspective of a domain expert, is through a collaboration with Brendan Meade, a professor at Harvard who studies earthquakes. Viégas explains that Meade’s research often entails working with high-power computational machines that crunch numbers for hours to complete tasks like predicting the aftershocks of an earthquake. She says that he’s working with Google to use deep learning networks that can approximate some of those results in far less time and for less money. While this would let scientists like Meade do science cheaper and faster, it could also have deeper implications.
“There might be something that these simple networks are learning about the physical world that we had not been able to learn so far because we so far need tons of math to do the same calculation,” Viégas says. “If we can learn back from these systems what they just learned about the physical world, could we be better scientists? Could we do better science?”
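The cheaper-and-faster idea Viégas describes resembles what machine learning researchers call a surrogate model: run the expensive computation once to generate training data, fit a small network to it, then query the cheap network instead. The sketch below is purely illustrative; the target function, network size, and training loop are my own assumptions for demonstration, not details of Meade’s actual earthquake work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for an expensive physical computation.
def expensive_simulation(x):
    return np.sin(3 * x) + 0.5 * x

# Pay the full cost once, offline, to generate training data.
X = rng.uniform(-2, 2, size=(512, 1))
Y = expensive_simulation(X)

# A tiny one-hidden-layer surrogate network, trained by plain gradient descent.
W1 = rng.normal(0.0, 0.5, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.5, (32, 1)); b2 = np.zeros(1)

def loss():
    H = np.tanh(X @ W1 + b1)
    return float(np.mean((H @ W2 + b2 - Y) ** 2))

init_loss = loss()
lr = 0.01
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)          # hidden activations
    err = (H @ W2 + b2 - Y) / len(X)  # gradient of the mean squared error
    gW2, gb2 = H.T @ err, err.sum(0)
    dH = (err @ W2.T) * (1 - H ** 2)  # backprop through tanh
    gW1, gb1 = X.T @ dH, dH.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
final_loss = loss()

# After training, evaluating the network is a handful of matrix multiplies,
# orders of magnitude cheaper than rerunning the original computation.
```

Viégas’s “learning back” question then becomes: does the trained network encode anything about the physical system that the original equations obscured?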
While it might seem like Meade’s work has nothing to do with Google as a company, Viégas and Wattenberg are thinking about Meade as a user. If they can learn how to build tools that scientists like Meade want to use for their research, there’s a huge market potential in the academic world.
The benefits for Google are clear: PAIR will help it understand more about users of AI and move the technology forward. It could also help frame the company as a responsible and trustworthy standard-bearer of a world-changing technology by establishing rules for how machine learning is designed into products, akin to the way Material Design helped Google build a reputation as a design-led company and influence UI design across the industry.
For Wattenberg, PAIR is essentially about developing an entirely new interface paradigm, which also means understanding what doesn’t work. The worst machine learning system, in his opinion, is one of the earliest examples of what’s now called artificial intelligence: Eliza, a program that pretends to be a psychotherapist. Beyond the fact that it’s not very smart, Wattenberg says that at some point it always breaks, meaning that every user will end their experience disappointed.
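Eliza worked by shallow keyword matching: it scanned the input for trigger phrases, slotted fragments of the user’s words into canned templates, and fell back to a stock prompt when nothing matched, which is exactly where the illusion breaks. A toy sketch of the mechanism (these rules are my own and far simpler than the original program):

```python
import re

# A handful of ELIZA-style rules: (pattern, response template).
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]
FALLBACK = "Please go on."  # the canned line where the illusion collapses

def reply(text):
    text = text.lower().strip(".!?")
    for pattern, template in RULES:
        m = re.match(pattern, text)
        if m:
            # Echo the user's own words back inside a template.
            return template.format(*m.groups())
    return FALLBACK  # any unanticipated input gets the same stock response

print(reply("I feel anxious"))          # Why do you feel anxious?
print(reply("Quantum chromodynamics"))  # Please go on.
```

The second reply shows the failure mode Wattenberg describes: the moment input strays outside the rule set, the system repeats itself and the user realizes no one is home.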
Disappointment in what AI can do right now is particularly rampant with today’s conversational interfaces. But it also means there’s a great opportunity for Google to establish what a user experience should look like for AI, and that’s where PAIR comes in.
“In the early days of graphical user interfaces, people had to invent scroll bars and menus,” Wattenberg says. “What is the scroll bar of AI? We don’t know yet.”
This article first appeared in www.fastcodesign.com