Jill Medeiros

The Future is Now: 5 Takeaways on Artificial Intelligence from a Communications Lens



Whether you're aware of it or not, we've all interacted with artificial intelligence (AI) at one point or another. That Gmail rollout that auto-completes your responses with just a hit of the tab button? AI. Talking to a "customer service" representative on Facebook? AI. SmarterChild, Amazon Echo, Siri? Artificially intelligent confidants and friends.


We've even helped train machine learning! reCAPTCHA, anyone?


And yet, despite all these touchpoints and the many ways AI directly impacts my life, the concept has remained elusive to me.


That is, until I got to dip my toe in the AI water at Comm Lead Connects: Less Machine, More Learning: Everything You Need to Know About AI, a "masterclass" and conversation hosted by the University of Washington's Communication Leadership graduate program this past Friday, January 25, 2019.


I attended Comm Lead Connects with 200+ of my fellow communications and tech enthusiasts and dove into what artificial intelligence is, how it can be utilized (and weaponized), and how we can create progress while considering the ethical implications. With a number of thought leaders on the topic from Microsoft to Facebook to HTC VIVE and even the ACLU of Washington, there was food for thought to satisfy an entire week of meal prep.


As a newbie to AI conversations, my interest was piqued, my feelings ran the gamut from enraged to enlightened, and I ultimately walked away informed. But before we get to the takeaways, let's start with some base definitions.


Artificial Intelligence: the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.


Machine learning: an application of artificial intelligence (AI) that gives systems the ability to automatically learn and improve from experience without being explicitly programmed. Machine learning focuses on the development of computer programs that can access data and use it to learn for themselves.
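For the code-curious, here's a minimal sketch of that idea using scikit-learn. Instead of hand-writing rules, we hand the program labeled examples and let it infer the pattern on its own. The toy "message length vs. spam" data below is made up purely for illustration.

```python
from sklearn.linear_model import LogisticRegression

# Made-up training examples: [message length, number of exclamation points]
# and whether a human labeled each message as spam (1) or not (0).
X = [[120, 0], [45, 5], [200, 1], [30, 8], [150, 0], [25, 7]]
y = [0, 1, 0, 1, 0, 1]

# No explicit "if exclamation_points > 3 then spam" rule is written anywhere;
# the model infers the relationship from the labeled examples.
model = LogisticRegression()
model.fit(X, y)

print(model.predict([[40, 6]]))   # likely predicts 1 (spam-like)
print(model.predict([[180, 0]]))  # likely predicts 0 (not spam)
```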


Now that we have that light conceptual foundation, let me try to boil five hours' worth of conference content down to my 5 takeaways to help you navigate AI from a communications lens:

 

1. "AI doesn't make decisions for you. It helps you get to decisions faster." – Vinay Narayan, VP, Product and Operations (Americas), HTC VIVE


There's an understandable worry that AI will take over the world and that humans will lose autonomy (and jobs), but there are valuable reasons not to fear it and, rather, to embrace it.


For context on AI systems, Noelle LaCharite, Director of Developer Marketing at Microsoft, set the stage with different ways we can utilize artificial intelligence to help change the world. In an "aha" moment, she showcased how AI can help distinguish Chip from Dale through a series of "spot the difference" clicks with Microsoft's Azure Cognitive Services. With tools like this, AI becomes much more accessible to the non-developer and allows individuals to leverage already established models, such as scanning a large document for a quote.
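To make that concrete, here's a rough sketch of what calling a trained image classifier like the Chip-vs-Dale one might look like in Python. The endpoint, project ID, iteration name, and key below are placeholders you'd get from your own Azure Custom Vision project, so treat this as an illustration rather than a copy-paste recipe.

```python
import requests

# Placeholders: substitute your own Custom Vision prediction endpoint,
# project ID, published iteration name, and prediction key.
ENDPOINT = "https://YOUR_RESOURCE.cognitiveservices.azure.com"
PROJECT_ID = "YOUR_PROJECT_ID"
ITERATION = "YOUR_ITERATION_NAME"
PREDICTION_KEY = "YOUR_PREDICTION_KEY"

def classify_image(image_path: str) -> list:
    """Send an image to a published classifier and return its tag scores."""
    url = (f"{ENDPOINT}/customvision/v3.0/Prediction/{PROJECT_ID}"
           f"/classify/iterations/{ITERATION}/image")
    with open(image_path, "rb") as f:
        response = requests.post(
            url,
            headers={"Prediction-Key": PREDICTION_KEY,
                     "Content-Type": "application/octet-stream"},
            data=f.read(),
        )
    response.raise_for_status()
    # Each prediction carries a tag name (e.g. "Chip" or "Dale") and a probability.
    return response.json()["predictions"]

for p in classify_image("chipmunk.jpg"):
    print(f"{p['tagName']}: {p['probability']:.2%}")
```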


Ultimately, the developments that Microsoft made in the space translated seamlessly to the release of the JFK assassination files. As Noelle shares in her talk, a few years ago, thousands of documents were scanned (as images, of course) for individuals to access. But because those scans weren't rendered as searchable text, you couldn't simply "⌘+F" a term like "Oswald" and get any results. Instead, you had to painstakingly sift through the documents page by page.


But, through "souped up AI in motion," as Noelle describes it, Microsoft was able to utilize optical character recognition (OCR) to identify text from images and transform those practically unreadable documents into an accessible search engine for anyone with a computer and an internet connection to scour.
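Stripped way down, the underlying recipe is "run OCR over each scanned page, then search the extracted text." The sketch below uses the open-source Tesseract engine rather than Azure's OCR service, and the folder and file names are invented, but it shows the general shape of the idea:

```python
# A rough illustration of the OCR-then-search idea (not Microsoft's actual
# pipeline, which used Azure's OCR). Requires pillow and pytesseract with
# the Tesseract engine installed.
from pathlib import Path

import pytesseract
from PIL import Image

def build_index(scan_dir: str) -> dict[str, str]:
    """OCR every scanned page image in a folder and keep the extracted text."""
    index = {}
    for page in Path(scan_dir).glob("*.png"):
        index[page.name] = pytesseract.image_to_string(Image.open(page))
    return index

def search(index: dict[str, str], term: str) -> list[str]:
    """Return the pages whose extracted text mentions the term (case-insensitive)."""
    return [page for page, text in index.items() if term.lower() in text.lower()]

index = build_index("jfk_scans")       # hypothetical folder of page scans
print(search(index, "Oswald"))         # pages that mention the term
```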


As an added bonus, Noelle developed a way to "super easily interact with the software": the J. Edgar Hoover chatbot, which you should totally give a try.


2. We have to build (language) with care.


Ok, now let's play a game of word association.


I say "newspaper," you'd probably say journalist, events, politics, art, sports.


I say "exercise," you'd probably say walking, running, BMXing, Peloton-ing.


I say "content strategy," you say... artificial intelligence?


Yes. You'd say AI.

Content strategy at its core is language and empathy. What content strategists do as it relates to AI is help make these "artificial" experiences better and more meaningful for people in their human interactions.


The Gmail rollout and M suggestions on Facebook Messenger are perfect examples of this as it relates to predictive text suggestions. How do we determine realistic choices of words users might select despite the automation? How do we maintain the authentic voice of the user? How do we make the experience better?
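To demystify the mechanic just a little: at its simplest, predictive text asks "what words have tended to follow this word before?" The toy model below counts word pairs and suggests the most frequent continuations. Real systems like Gmail's and Messenger's use far richer neural models, so this is only a flavor of the concept.

```python
from collections import Counter, defaultdict

def train(messages: list[str]) -> dict[str, Counter]:
    """Count which word tends to follow which in a set of past messages."""
    following = defaultdict(Counter)
    for message in messages:
        words = message.lower().split()
        for current, nxt in zip(words, words[1:]):
            following[current][nxt] += 1
    return following

def suggest(model: dict[str, Counter], last_word: str, k: int = 3) -> list[str]:
    """Suggest the k words most often seen after the user's last word."""
    return [word for word, _ in model[last_word.lower()].most_common(k)]

model = train(["see you tomorrow", "see you soon",
               "talk to you soon", "see you later"])
print(suggest(model, "you"))   # ['soon', 'tomorrow', 'later']
```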



Two content strategists from Facebook's Messenger and News Feed teams, Angela Pham and Jasmine Ty, shared the five things we need to think about as we build these systems with care from a content strategy perspective.

  1. Who is doing the talking? There is a fine line between empowering and inhibiting user expression, and it's important to consider whose perspective the voice is coming from. Is it authentic?

  2. What are we actually helping people do better? Do we want users to save more time or have more meaningful conversation?

  3. What are the language principles? What are our rules around internet speak? Around misspellings? Are they too prescriptive for certain communities? Capitalization at the beginning of sentences is grammatically correct, but is it how users would communicate?

  4. How do we serve the world, not a subset? There can be a lot of bias in the choices that we make. Are certain assumptions being made through use of suggested emojis (skin color, gender, etc.)? We must try to be as universal as possible in the choice suggestions that we make.

  5. How do we build experiences if we're not the ones building the products? "We might not be writing the code," Jasmine explains, "but you're telling the story, articulating the vision, and helping share the experience."


3. Tech is a set of value choices. And tech without values can and will worsen bias.


Like any organization, values drive business decisions and, ultimately, ethics. I just dove head first, with my shoes and socks still on, into both Fyre Festival documentaries on Hulu and Netflix, and there was a clear disconnect between values and ethics as the company made each incremental decision toward its cataclysmic event.


Tech and AI are no different. Spoken over and over again, from speaker to speaker, came the line, "With great power comes great responsibility." There's an understanding that AI can bring incredible change, but are we considering the communities it impacts and the potential repercussions?


Shankar Narayan, Technology and Liberty Project Director of ACLU of Washington, delivered a thought-provoking presentation on How AI, Data, and Surveillance Impact Vulnerable Communities (and How They Always Have).


During his presentation, Shankar shared a number of different ways AI can reinforce stereotypes or marginalize different communities. But the one that shook me the most was in regard to affect recognition and how facial/emotion AI systems can be biased and rooted in discrimination.


In one example from his presentation, Shankar compared two NBA players: Boston Celtics player Gordon Hayward and Indiana Pacers player Darren Collison. In a facial recognition test judging degrees of happiness and anger, the algorithm rated one of these men (A) 57% happy and 0.1% angry and the other (B) 39% happy and 27% angry. That's quite a statistical difference in the anger category. And yet, despite all the cues we know as humans to indicate happiness and anger, like a tooth-filled grin, Hayward was rated with the former (A) and Collison the latter (B).


As another presenter noted, it's critical that we don't teach machine learning our human biases.


When thinking about tech, Shankar proposed, rather than wondering whether the tech is good or bad, ask: what problem are we trying to solve? By doing so, we can look at the challenges we face through a more ethical lens. Facial recognition software has incredible power. And, again, as many of the speakers echoed, with that power comes great [ethical] responsibility.


4. We need to be proactive about who is in the conversation.


Just after the halfway point in the conference, we took a break from AI for a smidge and looked at organizational culture and how it could help inform some of the decisions we make as communications and tech professionals.


Claudia Chang, a professor in the Communication Leadership graduate program and owner of Emerald Global LLC, an international consulting business, helped us as participants take a step back and look at the bigger picture.


In a great piggyback off of Shankar Narayan's previous presentation, she encouraged us to consider who is in the conversation when thinking about the development of these emergent technologies. How closely the team working on a product or service mirrors the potential customer base is critical when thinking about the real impact, positive or negative, on any given community.


5. Stories, even ones about technology, are really about people. That's what gets other people to respond.


Videos speak louder than words, so for this one I'll share two moving videos that Kerry Schimmelbusch, Senior Communications Manager at Microsoft, presented during her talk on how we as communicators help tell the stories and benefits of AI products and services.


In order to tell these stories, Kerry notes, "you have to be an investigative journalist within your own company," and you have to ask the developers the right questions. "Be humble and ask them to explain it to you like you're a 3rd grader." Simplifying language and understanding root problems and solutions is how we're able to bring these big ideas to the everyday person.


VIDEO #1: Skype Translator: "We recently previewed Skype Translator to two elementary school classes—one in Washington and one in Mexico City. A few rounds of “Mystery Skype” was all it took for these students to discover the potential of Translator to break down language barriers and bring people together."




VIDEO #2: SwiftKey Symbols: "SwiftKey Symbols is a symbol-based assistive communication app for non-verbal individuals, available on Android devices. In this video, see how non-verbal students at Riverside School in the UK use SwiftKey Symbols to communicate with their caregivers."


 

Did you attend the event and have your own takeaways you'd like to add? Anything you'd change or remove? Let me know!


Added bonus: the event was live streamed on Facebook! Catch the whole thing here:
