Why Tech Marketers Should Care About M3gan

It’s an image that has persisted in the cultural consciousness for more than fifty years: the unblinking red eye. Always surveilling. And ultimately, with that emotionless, harrowing monotone, saying:

I’m sorry, Dave… I’m afraid I can’t do that. 

HAL 9000, 2001: A Space Odyssey

The AI takeover has been a heavily featured idea in science fiction since the 1968 release of Stanley Kubrick’s 2001: A Space Odyssey. Since then, pop culture has introduced us to a variety of AIs, both good and bad: from R2-D2 to JARVIS to WALL-E to Her (aka Samantha). 

As brands continue to launch their own AIs in the market, it feels like these seemingly sentient algorithms are stepping off the page and out of the screen into our lives.

And now HAL 9000 meets the killer doll in the 2023 dark-comedy-horror-thriller M3gan. Part slasher film, part campy romp leaning into possible cult status, the movie is also generating buzz about its AI premise: How comfortable are we letting technology raise our kids—is it iPads today, AI tomorrow? Can we trust the smart tech we bring into our lives? How much do we really want to rely on AI for the things that matter most?

If you’re building or branding an AI experience, at least one of those questions is likely relevant to the task ahead of you. The AIs we create and meet today are not built in a vacuum. Examples and narratives from pop culture shape our expectations for—and therefore our reactions and interactions with—AI. That’s true for all of us—the developers, designers, product leads, and marketers bringing these entities to life as well as the target consumers. 

So, with this in mind, we can look at what brands can learn from the familiar examples that have so deeply influenced our understanding of human-AI interaction. Consideration of key examples, lessons, and questions equips us with a better chance to add something new and positive—to not only the market landscape, but the entire conceptual field of what an AI experience is and can be.

HAL & Other AI Nightmares: From A Lack Of Control To Customer Command

HAL 9000 (1968) and M3gan (2023) are in good company: Skynet from The Terminator (1984), the agents of The Matrix (1999), GLaDOS from the Portal video game series (2007), even the more recent and nuanced Ava from Ex Machina (2014) and the hosts from the popular TV series Westworld (2016-2022)—all the way back to the man-made intelligences that revolt in Karel Čapek’s 1920 play Rossum’s Universal Robots (R.U.R.). We have more than a handful of villainous AI depictions in our collective consciousness.

For the last century, pop culture’s AI villains have followed a single primary narrative: humans suffer when an AI wrests control of the circumstances and prioritizes something other than human welfare. Overwhelmingly, negative pop culture portrayals of AI center on the fear of losing human control.

It’s worth noting that AI as a concept itself has become superficially fear-inducing in very large part due to these depictions. 

For users to feel comfortable with AI, brands should strive to emphasize an aspect of customer command in any AI experience. A clear user-centered quality that promotes the customer’s sense of control in every interaction—in both form and functionality throughout the experience—is key to defusing the most pervasive and potent fears about AI and becoming a differentiated, positive example.

In the market today—in the same vein of companionship as the M3gan robot—we find Replika, an AI companion app that grew in popularity over 2020 and remains popular. Replika is interesting for many reasons—not least because she shows you, on request, everything she’s learned about you while you’ve chatted: e.g., You’re a consultant or You enjoy autumn. Unlike the characters’ interactions with M3gan on screen, you can delete any one of these memories—and you can easily leave the experience or delete your Replika and/or your account at any time. These are simple but vital features that contribute heavily to a sense of customer command.

In another example—and in the realm of financial services, where the customer’s sense of control is perhaps particularly sensitive—Capital One’s Eno provides a sense of customer command through two routes. First, through the language that frames the experience: Eno is positioned as helping you, here for you, connecting you to your spending, all at your request. That user-centered positioning is a key step in driving home the idea that the AI will never prioritize anything other than your priorities. Second, the customer has the option to call upon Eno wherever the customer would like to—in the app, on the web, or even via SMS. It’s a good example of even small tactics that can go a long way to helping the user feel in control of every interaction.

Ask yourself: How might you design the experience in a way that puts users clearly and constantly in the driver’s seat?

TARS and Samantha: User-Centered Personalization That’s More Than Skin-Deep

We all know that personalization is a longtime buzzword, but we also know people still want it—and it seems more attainable than ever with AI. Now we’re aiming for AI-enabled interactive personalization, 1:1 personalization, and hyper-personalization.

But what do people really expect from AI-powered personalization?

Certainly more than getting to choose your AI chatbot’s hair color. The child in M3gan pairs with the AI in a way that speaks to much deeper personalization. Users expect more relevant content from the algorithms we’ve come to know, but there’s also a real risk of veering into creepy. (See above: customer command is vital!) But what else does pop culture show us about personalization that goes deeper—and still emphasizes that sense of customer control?

What about personalizing the way the AI interacts with the human in the first place or the fundamentals of the AI-human interaction—the AI’s personality?

The AI-robot copilot TARS from Christopher Nolan’s 2014 film Interstellar is a great example. The human pilot can verbally access the AI’s personality parameters—like honesty, discretion, and humor—and modify them instantly and easily, based on their preferences for the sort of copilot they’d want.

In the market today, there’s Youper, an AI mental health app. Youper opens with unique, engaging questions about the user’s goals, then frequently checks in on the user’s mental and emotional state—paired with user-controlled module selection—to adapt the service over time. Youper achieves that difficult balance between the critical feeling of user command and personalized guidance tailored to be most helpful and engaging for the user.

And then there’s the weather app CARROT, which is explicit in allowing the user to choose, on a sliding scale, the AI construct’s personality: from Professional to Snarky to Overkill, even including options to select its political leanings (from apolitical to communist to anarchist, with all the imagined options in between). The user also has the ability to customize their dashboard and how information is visualized, balanced by a smart layouts feature that recognizes the user’s preferences and adapts to current weather conditions. It’s wildly engaging—even when, at the end of the day, what you’re looking at is weather information. 

Ask yourself: How might you harness the potential of AI to create the ultimate individualized user-centric experience?

R2-D2 and BB-8: The Form Factor Matters

We’ve known for a while that there’s a point at which being humanlike, but not quite human, becomes creepy—the uncanny valley. The embodied AI in M3gan dances on that line beautifully: a robot, but part of the family; funny, but obviously intelligent and vastly capable.

Some of pop culture’s most beloved synthetic intelligences, the Star Wars droids R2-D2 (1977) and BB-8 (2015), take a different approach. Both droids have geometric, distinctly non-human forms and rely primarily on forms of expression other than human speech. They’re also highly competent, demonstrating clever resourcefulness and saving the human protagonists’ lives on more than one occasion. WALL-E (2008) takes a similar approach and is perceived as playful and harmless.

It’s interesting to consider that the highly humanoid C-3PO, another beloved droid, is often portrayed as incompetent. Perhaps there’s something to the idea that a well-liked, trusted AI can be humanoid or competent—but rarely both—a tradeoff to consider in achieving likeability and trust, in contrast with the Avas, Dolores Abernathys, and M3gans of the world.

In the market today, brands are actively negotiating this balance: how humanlike, or not, should their AI products look, sound, and feel?

Ask yourself: How might you pick the right form factor to reflect the unique strengths of your AI in the context of your brand—while being aware of the most common pitfalls?

Learn From Those Who Came Before—And Are Still Out There

When it comes to building and branding an AI experience, it’s critical to recognize that you’re never starting from a blank page. Alongside other products on the market—and likely even more top-of-mind for consumers—is the century’s worth of pop culture that shapes our perceptions of this cutting-edge tech finally coming into our actual daily lives. 

Customers very well may mention WALL-E, Samantha, or even M3gan as comparison points while talking about your product at the dinner table, because those are the reference points we collectively have—and those are the comparisons that will inform perceptions of, and interactions with, your brand experience, whether your team has considered them or not. So, strange as it may feel to bring up the latest in kitsch horror at the board table, it’s better to consider sooner rather than later what comparison points are out there, what questions to ask, and which principles to incorporate, so you can introduce a differentiated, positive new example of AI into the conversation.

M3gan Image courtesy of Universal Pictures
