The Seattle MacArthur Fellow who teaches common sense to computers

Asked to consider the context behind the statement “[Person] won a MacArthur award,” COMET, a text-based artificial intelligence web application, generates common-sense hypotheses from the simple sentence. Dr. Yejin Choi gives a knowing nod at the program’s output on her shared Zoom screen. It’s Wednesday, Oct. 19, a week after she was announced as one of 25 MacArthur Fellows by the John D. and Catherine T. MacArthur Foundation, and she’s demonstrating the program, whose name stands for COMmonsEnse Transformers, for Crosscut.

Choi, a professor at the University of Washington’s Paul G. Allen School of Computer Science and Engineering, received the designation and an $800,000 “genius grant” for her groundbreaking work in natural language processing, the artificial intelligence subfield that develops technologies capable of understanding and responding to human language.

Whether or not we interact with it knowingly, natural language research affects us all. Whenever we ask a smart device like Siri or Alexa to remind us to buy milk, lean on autocorrect while typing an early-morning message, or let Google autocomplete our search queries, we’re asking artificial intelligence programs to parse what we say and type and to interpret our requests correctly. The technology is also key to a global industry involved in everything from supply chain management to healthcare.


But computers still take our requests literally, without understanding the “why” behind our questions. The programs behind AI assistants have essentially no grasp of ethical or social norms, slang or context.

“Human language is fascinating and ambiguous, whatever language it is,” Choi said. “When people say, ‘Can you hand me the salt bottle?’ they aren’t asking whether you’re able to do it. There’s so much implied meaning.”

At worst, building AI algorithms on content culled from the internet can riddle them with racism and misogyny, which means they can sometimes be not just unhelpful but actively harmful.

Choi is part of a pioneering team of researchers aiming to build artificial intelligence programs that can understand what we really mean and give us the answers we need in ways that are both accurate and ethical. Besides COMET, she has helped develop Grover, an AI “fake news” detector, and Ask Delphi, which generates AI advice on whether particular courses of action or statements are moral, drawing on answers from online advice communities.


She recently caught up with Crosscut to talk about her MacArthur honor, demonstrate some of her research projects and discuss the responsibility she feels to help develop AI ethically. This conversation has been edited for length and clarity.

Crosscut: How did you feel when you found out you won this award?
Choi: I’ve come a long way. That’s one way to put it. I consider myself a late bloomer, working on projects that may be a bit odd and risky, but that are definitely worth the risk.

The reason I chose this work wasn’t that I expected a reward like this at the end. It was more that if I took risks and failed, nobody would notice, so I didn’t feel strongly one way or the other about it. Even if you fail, you learn something from the experience. And I felt I could contribute more to society this way than [by] doing what other smart people can already do.

What attracted you to AI research in the first place, particularly the risky aspects you mentioned?
I wanted to study computer programs that could understand language. I have a broad fascination with language and intelligence, and with the role language plays in human intelligence. We use language to learn; we use language to communicate; we use language to create new things. We conceptualize things verbally, and that’s fascinating to me. Maybe it’s because I wasn’t very good with language when I was growing up. Now my work requires writing and a lot more talking, so I’ve had to get better at it.


There’s a sense that language is really important for intelligence, but that was just a vague feeling I had. I gambled my career on it.

It turned out to be much more exciting than expected.

How well does AI understand us now?
Computers parrot what humans have said, much better than parrots do, but they don’t really understand. And that’s the problem: if you deviate a bit from the familiar patterns, they often start making weird mistakes that humans would never make.

Computers can seem creative, able to produce something a little weird and different that humans then make sense of. But the truth is that there’s no real feeling or understanding behind it.


