If you flew commercially in the 1930s, there was a decent chance you were going to have a bad time.
Airline pilots, not yet accustomed to the duty of transporting everyday, non-aviating people, tended to fly without back-seat comfort in mind.
Dicey ascents, white-knuckle turns, and high-altitude nausea were commonplace passenger experiences, ones that threatened to keep this new idea—and the technology underpinning it—permanently grounded.
But even then, commercial air travel was a safe way to get from point A to point B. It just didn't feel that way. So the question became: how do you convince millions of people that this new, scary technology was not just safe, but the future of travel? The answer: people addressing the concerns of people.
For the airlines and their proponents, that meant the fix wasn't more engineering. Technology wasn't going to ease people's fears of technology. It had to be people. Airlines hired the first stewardesses (registered nurses, chosen in part to reassure nervous passengers) and trained pilots to fly with the back seat in mind. These measures were a deliberate result of listening to people (and their screams), addressing their concerns, and ultimately improving their experiences to guide both passengers and airlines to new horizons.
The advent of AI feels like those early days of commercial airlines. AI is poised to be revolutionary, but people are still skeptical, and even afraid. It’s this new power they don’t quite understand. And just as commercial aviation opened up the world to people, AI promises boundless possibilities for how we think, create, and work.
But when you’re given the ability to do anything, it’s just as likely that you’ll end up doing nothing. That’s not a technology challenge. It’s a people challenge. To make the best use of AI, and to overcome some fear along the way, we need to again put people at the center. By that I mean giving real people meaningful involvement in the AI adoption process. Because that’s what builds transparency, trust, and comfort.
People don't just do things robotically. It's not in our nature. We wonder why we're doing them and what value there is in doing them. That instinct is critical when dealing with any technology. So, if you are considering implementing AI, the first place to start is by asking yourself a fundamentally human question: Why?
That question of why can take a number of forms, all of them worth asking: Why AI at all? Why now? Why this problem, and why for these people? Asking them anchors your AI strategy in purpose (the need for purpose being another human trait), instead of just chasing trends.
We've established the importance of humans in the process. The next question is how to involve them. The choices you make with your technology will affect customers and employees (real people), but those same people can also shape your choices. That's why it's important to turn to them when deciding the best route to take. Go to the real people impacted and learn about their needs and concerns. Enlist their help in creating the solutions that will drive their experiences. Offer them the opportunity to evaluate and suggest improvements.
Your AI strategy can't be built in a vacuum. Co-creation with people, for people, is the step that is too often overlooked, and it's where companies fall flat. Doing it gives people agency, engages them with the technology first-hand, and amplifies your opportunities to use AI in the most suitable ways.
Finally, any AI implementation or use case won’t be a one-time project. Think about how far AI technology has come in the last year alone. That rate of change isn’t slowing down anytime soon. The technology isn’t going anywhere, and it will keep evolving, which means you have to evolve with it.
Two ways to ensure continued growth are to create regular, dedicated spaces for people to ask questions, raise concerns, and share successes with your AI, and to encourage team learning. In doing this, you're also building relationships (another human need), both with each other in this strange new world and with the technology itself.
Articulating your "why" and getting stakeholders involved isn't an easy or one-directional process. The right answer depends entirely on your situation. So it's OK if you don't have all the answers. That's why consultancies like Studio Science exist. My fellow strategists and designers work daily with the fuzziness and ambiguity that naturally arises when connecting humans and technology.
Things are moving so fast that it might feel like you don’t have time to take these steps, but doing them won’t slow you down. In fact, they’ll actually help you prevent missteps by aligning AI with your core strategy and business goals—and help your team collectively align with AI along the way.
And if you find yourself at 35,000 feet in a 50-degree bank, technologically speaking, don't panic. We're here to help.
As a Senior Strategist, Rob is a people-centered design researcher who helps brands create meaningful solutions, with strong skills in design thinking, design leadership, visual sensemaking, design facilitation, and ethnography. He uses design thinking to identify and solve problems in a variety of contexts and believes that people know their own problems better than anyone else.