

When the consumer version of Google Glass hit the scene in 2014, it was heralded as the beginning of a new era of human-computer interfaces. People would be able to go about their days with all the information they needed right before their eyes.
Eight years later, how many people have you seen walking around wearing smart glasses?
As Northwestern professor Elizabeth Gerber puts it, the lesson here is that “technology is only acceptable to people if they want it.”
Speaking recently at the Stanford Institute for Human-Centered Artificial Intelligence (HAI) winter conference, she said: “We didn’t want to wear Google Glass because it invaded our privacy. We didn’t want to wear it because it changed human interaction. Remember Google Glass when you think about what people want AI to do.” (For a comprehensive overview of the entire conference, see Shana Lynch’s article on the HAI site.)
“Designing AI that people want is important to ensuring it works,” Gerber continued. Another lesson came with the adoption of AI-based tutors over Zoom during the COVID-induced closure of schools – children simply turned them off. The same goes for workers required to work with AI systems, she added.
Additionally: The problem with AI: It’s not you, it’s the data.
Designing human-centered AI requires closer interaction with people across the enterprise, and it’s often a difficult task to get everyone to agree on which systems are useful and valuable to the business. “Having the right people in the room doesn’t guarantee consensus, and in fact the best results often come from disagreement and discomfort. We need to deal with productive discomfort,” said Australian National University professor Genevieve Bell, a speaker at the HAI event. “How do we teach people to be comfortable in uncomfortable places?”
Some may even say that the best AI is no AI at all. “Sometimes no AI is the best AI,” Gerber points out. “As you design, take this human-centered approach and design for people’s work. Sometimes a script is all it takes. Instead of taking an AI-first approach, take a human-centered approach. Test the design iteratively with people to increase their job satisfaction and engagement.”
When designing AI, it’s probably best to avoid trying to make AI more human-like, such as relying on natural language for intuitive, conversational interfaces. In the process, the system’s ability to make people more productive may be diluted or lost altogether. “Look what happens when someone who doesn’t get it designs a signaling system,” said Ben Shneiderman, a professor at the University of Maryland. “Why conversational? Why a natural language interface, when a well-designed structured system with distinct parts – in line with the concept of direct manipulation – may be the better design?”
Additionally: AI’s true goal may no longer be intelligence.
“The thinking that human-computer interaction should be based on human-to-human interaction makes for poor design,” Shneiderman continued. “Human-human interaction is not the best model. We have ways to design better, and moving beyond natural language interaction is an obvious one. There are many ways we should go past that model, reimagining the idea of what tools can be – supertools, tele-bots, and active appliances.”
“We don’t know how to design AI systems to have a positive impact on humans,” said James Landay, vice director of Stanford HAI and a conference organizer. “There’s a better way to design AI.”
The following recommendations emerged from the conference.
- Reframe and redefine human-centered design: Panelists highlighted the need for a new definition of human-centered AI – systems that improve human lives and that challenge the problematic incentives driving the creation of AI tools. Too many current efforts start by discounting human expertise, says Shneiderman. “Yes, people make mistakes, but they’re also extraordinary in their creativity and expertise. What we really need to do is build machines that make smart people smarter. We want to enhance their capabilities. We understand a lot about that. We design with constraints, guardrails, barriers. These are things that have been in the human-factors literature for 70 years: How do we prevent failure? So a self-cleaning oven, once it goes above 600 degrees Fahrenheit, won’t let the door open, right? There’s a lot of engineering built into that. That’s design at work. That’s the kind of right design we need to build more of. And we need to amplify human skills while reducing the chance of error.”
- Seek multiple perspectives: Jodi Forlizzi, a professor at Carnegie Mellon University, and Saleema Amershi, senior principal research manager at Microsoft Research, called for multidisciplinary teams made up of software designers, engineers, and other disciplines. “Even when we have people like designers or researchers who understand human-centered approaches reshaping some of our processes, most of those people aren’t in the room where decisions about what to build get made,” said Amershi. “We have to get human-centered people and processes working with the technologists at the earliest stages.”
Additionally: Artificial Intelligence – 5 Innovative Applications That Can Change Everything
- Rethink AI success metrics: “The question often asked is what these models can do, but what people need to ask is what they can do with these models,” Amershi says. “AI is currently optimized for accuracy, but accuracy is a single value. Designing for human-centered AI requires human-centered metrics.” (A sketch of what multi-metric evaluation might look like follows this list.)
- Keep humans in the loop – with AI that can easily be overridden: “We want AI models that people can understand, predict, and control,” Shneiderman said. “It’s the long-standing perception that you’re in charge and you can override it. Our cameras have long handled things like shutter speed, focus, and color balance automatically and reliably. You can see if the focus is wrong and adjust it. The mental model is that users should have control panels where they can set what they want; the system gives them previews and options, and they can always override the automation.” (A sketch of this override pattern also follows below.)
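Amershi’s point that accuracy is “a single value” lends itself to a concrete illustration. The sketch below is a minimal, hypothetical example – not anything presented at the conference – that scores a session log for an imagined AI assistant on accuracy alongside human-centered signals such as acceptance rate, task time, and reported satisfaction. All field names and data here are illustrative assumptions.

```python
# A minimal sketch: reporting human-centered metrics alongside accuracy.
# Everything here (record fields, the sample log) is an illustrative assumption.
from dataclasses import dataclass

@dataclass
class EvaluationRecord:
    model_correct: bool         # the classic accuracy signal
    user_accepted: bool         # did the person actually use the output?
    seconds_to_complete: float  # task time with the AI in the loop
    satisfaction_1_to_5: int    # post-task survey score

def human_centered_report(records: list[EvaluationRecord]) -> dict[str, float]:
    """Summarize accuracy plus human-centered metrics over a session log."""
    n = len(records)
    return {
        "accuracy": sum(r.model_correct for r in records) / n,
        "acceptance_rate": sum(r.user_accepted for r in records) / n,
        "mean_task_seconds": sum(r.seconds_to_complete for r in records) / n,
        "mean_satisfaction": sum(r.satisfaction_1_to_5 for r in records) / n,
    }

if __name__ == "__main__":
    log = [
        EvaluationRecord(True, True, 42.0, 4),
        EvaluationRecord(True, False, 90.0, 2),  # correct, yet rejected by the user
        EvaluationRecord(False, False, 75.0, 2),
    ]
    for metric, value in human_centered_report(log).items():
        print(f"{metric}: {value:.2f}")
```

Note how the second record is “accurate” but still fails the user – exactly the gap a single accuracy number hides.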
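Shneiderman’s camera analogy also maps naturally onto code. The following sketch is again purely illustrative, with hypothetical function names and a made-up brightness-to-shutter heuristic: automation proposes a setting, the user sees a preview, and an explicit manual value always takes precedence.

```python
# A minimal sketch of the "preview plus override" control-panel pattern.
# Function names and the brightness-to-shutter heuristic are assumptions.
from typing import Optional

def auto_shutter_speed(scene_brightness: float) -> float:
    """Hypothetical automation: derive a shutter speed (seconds) from brightness."""
    return max(1 / 4000, min(1 / 30, 1 / (scene_brightness * 100)))

def effective_shutter_speed(scene_brightness: float,
                            manual_override: Optional[float] = None) -> float:
    """The human's explicit choice always wins over the automated proposal."""
    proposed = auto_shutter_speed(scene_brightness)
    print(f"preview: automation proposes 1/{round(1 / proposed)} s")  # the preview step
    return manual_override if manual_override is not None else proposed

if __name__ == "__main__":
    # The user inspects the preview and accepts the automatic value...
    print(effective_shutter_speed(scene_brightness=8.0))
    # ...or overrides it, just as with focus or color balance on a camera.
    print(effective_shutter_speed(scene_brightness=8.0, manual_override=1 / 250))
```

The design choice mirrors the quote: the system remains useful on autopilot, but the user can see what it is about to do and retains final control.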