3 Things to Consider When Adopting AI

3 minute read

By Simon Kriss, Chief Innovation Officer for the CX Innovation Institute

Fellow Star Wars geeks will remember a wonderfully simple yet poignant scene in The Empire Strikes Back (later retitled Star Wars: Episode V – The Empire Strikes Back).

Luke Skywalker’s X-wing fighter is stuck in the mud after crash-landing on the planet Dagobah. Whilst training in the ways of the Force with his Jedi Master, Yoda, Luke is asked to lift his fighter out of the mud. To this request Luke replies, “I’ll give it a try”.

At this point, Yoda delivers perhaps the most impactful line of any movie ever made. He looks at Luke and says, “No. Try not. Do, or do not. There is no try.” So simple and yet so powerful, and never more important than in the pursuit of AI.

AI IS EMPIRICAL

Let me say that again. AI is empirical. Organisations cannot simply research their way through AI adoption; they need to go ahead and do it. Using it is what uncovers more uses.

To quote Yoda, “There is no TRY”.

However, taking an empirical approach to any new business venture feels risky to many businesses, and rightly so. Leaping in with both feet takes nerves of steel, which most organisations simply do not have (or cannot afford to have).

SO, HOW DO YOU STRIKE A BALANCE?

Here are three things to consider that will help make the empirical journey a little more palatable.

  1. Create a failure-tolerant culture.

I am not talking about an entire organisational culture change – that takes years. I am talking about securing executive support to create a failure-tolerant culture for your AI adoption team. Make it OK to fail fast, recover and move on. Be prepared to celebrate mistakes as much as wins (if not more).

Remember when the SpaceX team cheered as their rocket exploded? That is a failure-tolerant culture!

  2. Know the reason why.

Many organisations are jumping into AI through a product-led approach, and sadly many of those will fail. Maybe not in the first three months, but at some point it will catch up with them in terms of economic loss, brand degradation or regulatory breach.

Don’t let the shiny new software you saw at a conference be your guiding light. Take the time to figure out the full use case: who will use it, how will it be used, how might a ‘bad actor’ abuse it, what is the expected ROI, what are the risks, and so on.

Make sure that when the CEO asks why you chose this use case first, you can answer the question competently and convincingly.

  3. Proof removes scepticism.

Expect people to be sceptical of new technology, especially something as confronting as AI. Always do a proof of concept (PoC) and make sure you learn something from it. PoCs that are perfect in every way are a farce. Push it, break it, troll it, use ‘the Force’ if you must. Then, once you have tested it robustly, share it with the wider organisation.

Show everyone your journey, your crashes, your scars, your wins and, most importantly, your proof that it works and delivers an ROI.

If you can’t prove ROI then kill the PoC, celebrate the loss, and go back to the drawing board.

FINALLY

Responsible AI adoption is not a 2-week sprint, nor is it a 2-year marathon. Plan your journey properly and then… listen to Yoda!

ABOUT THE AUTHOR

Simon Kriss is the Chief Innovation Officer for the CX Innovation Institute. He works with Boards, Executives and Leadership teams on Responsible AI adoption.

Simon is the author of “The AI Empowered Customer Experience” and hosts podcasts on CX and AI.
