5 Ways You Are Sabotaging AI As A Leader
In recent weeks, with various tech announcements, we have started to hear shouts that the AI bubble is bursting.
Before that, we got the long-overdue phrase "AI washing" to name the inevitable trend of companies sprinkling the term Artificial Intelligence across financial results and startup pitch decks just to capitalise on the buzz.

With growing scepticism that AI can deliver on the promises made, scrutiny will turn to the technology, its capabilities, and those who built it.
I'd like to shine the spotlight on business leaders, founders, CEOs, and decision makers: their role in the success or failure of AI investment, and the mistakes I have seen over the past two years that have brought us here.
Let's start with the big one, shall we?
1. Prioritising AI for AI's sake
This happens with every single technology hype cycle. I call it "shiny object syndrome", and it comes hand in hand with FOMO. We want the next big thing, the most sophisticated technology, to keep up with the industry. And we don't slow down to think about what it is for.
The number one reason AI projects are failing is that they were prioritised for the sake of having AI projects, instead of starting with a business or customer problem or a real need. If you start with a solution and work backwards, it will never be as effective or successful as finding the right tool for the job. AI is popping up as a standalone priority from leaders who don't know where it should sit and aren't asking themselves if they need it at all.
There is an element of "peer" pressure in this too. Your peers might be your competitors, raising the question "what if we fall behind because they have AI and we don't?", or your shareholders, who want to see the financial results they have heard AI delivers. I'm not going to say these are easy to ignore, but I am confident they are best tackled by going back to first principles of what your business is trying to achieve and thoughtfully mapping out the best route to get there.
I think this peer pressure is most amplified in Big Tech. We have seen release after release after release from the main players, and not all of them good! There is always an added talent war within this group, as they want to put themselves visibly in the game to attract the in-demand engineers they believe they will need to compete down the line. In the short term, their users suffer, getting a stream of features they never wanted as the CEOs show off their latest toy.
I laughed yesterday when I saw Cassie Kozyrkov had published the article "Start with Why AI", as I am scheduled to speak on a podcast later this week themed "The Why of Data and AI". There is no company I would rather be in than Cassie's on this one, though!
If we aren't asking the right questions, then I promise we won't get the right answers. The questions leaders should be asking are:
What are the key priorities in our business strategy?
How does this technology deliver business and/or customer value?
Is it the best option we have to do so?
2. Assuming that just because the tech is new the situation is too
As I said, the last mistake happens with EVERY technology hype cycle. And yet?
Leaders choose to treat AI as the new new. Different from anything that came before it. And they actively ignore all the learnings they have from previous technology disruptions.
I'm not saying there isn't anything unique about AI and data products; there are certainly specific considerations that teams need to equip themselves to handle. But at a leadership level, we have so much insight into how customers, teams, competitors, and others handled shifts in the past.
Human behaviours repeat.
We can look back at the barriers to adoption we faced when we introduced employee systems in the past, how little control we had when social media appeared on the scene, the operational challenges of getting revenue from less sophisticated machine learning initiatives, and so many more. We can then apply those learnings to what might happen with AI and how we can set ourselves up for success.
Today, this reflection seems to happen mostly at a macro level, where we see the graphic warning that the "trough of disillusionment" is on its way, or we compare the dot-com bubble to what we are seeing now.
Leaders have a responsibility to pair these reflections with an understanding of their own organisations if they are to take real action to change the outcomes.
3. Treating this as just a technology change
Understanding your organisation means understanding people.
At the core of any meaningful change is People.
As Sol Rashidi, author of "Your AI Survival Guide", warns, we need to shift our expectations: away from focusing on the algorithms and the big data, towards realising that most of the work, and most of the success, is about the people and the plan.
If AI is really a transformation for your business, one that reaches all aspects of your operations, then this is a change management initiative and should be treated as such. Digital transformations fail all the time (see mistake 2!), and the main reasons are that we, as leaders, do not seek to understand the impact on individuals or the level of real buy-in that exists.
Employees need to understand how their roles, and the expectations placed on them, are going to change. They need to understand your intentions and why they are what they are.
They need to understand "What's in it for me?".
Whether we roll out AI in external products or internal processes, we need to consider the impact on people at every step and the behaviours that might block our plan.

4. Fixating on people like them
Overemphasising the opinions, preferences and actions of "people like me" is a recipe for disaster.
Particularly when dealing with new technology, we have to be aware of the echo chambers we exist in.
If I am a CEO who spends my morning commute listening to the latest tech podcast or reading blogs by AI experts to stay ahead of the curve, then, more likely than not, I am not representative of my customers or my average employee.
Not everyone has the same awareness of, interest in, or trust in AI.
When Meta rolled out different AI features in Facebook Groups and Instagram search, the online discourse immediately asked "who wanted this?". There are memes about the changes users actually want from these platforms; instead, they got confusing AI integrations that didn't solve any obvious problem, and created some new ones.
On my LinkedIn about six months ago, I ran a poll asking how many people were using AI in their daily jobs. The result was close to 90%, but my career, and so my network, skews towards data science and tech. My family members only learned about ChatGPT a few months prior to that, and the majority of people I went to school with have not even tested it.
People commented "What age are those that said no?" and, while it was a small sample, I couldn't see any correlation with demographics. Different functions in your company will be more and less exposed to the latest tech. Different roles will have more resistance to what it means for their future. Attitudes can be changed drastically by one bad experience, so even those you would expect to be all in might not be.
Don't assume, and don't fall into the trap of over-relying on your own circle.
Poor decisions that leave people, whether customers or employees, behind can be the biggest sabotage of all.
5. Ignoring data dependency in favour of a quick fix
I've spoken a lot about people problems, and that emphasis is intentional: these are the responsibility of leadership and genuinely some of the biggest mistakes being made.
But I'm adding one more, because I believe we, as data professionals, all have a responsibility to build literacy and manage expectations with leadership on this topic.
AI is nothing without data.
You can't skip this step.
And in a world of open-source algorithms, your data is your advantage, so why would you want to?
Everyone wants to see the fastest possible returns on their AI investment, and the reality is that those returns might be slowed by spending time on data governance and infrastructure initiatives. But those initiatives are necessary.
We need to take the adage "Garbage in, Garbage out" out of the data teams and into the boardroom.
If you, as a leader, are not committing time and resources to a data strategy then this will be obvious in your AI strategy sooner or later. I've written a full article (Does your company have a data strategy?) on this so I won't dwell on it here.
Google's partnership with Reddit gave us the best "evocative audit" (hat tip to Joy Buolamwini for that phrase) we could have hoped for in proving this out on the biggest stage. They spent $60m to get "more" data to feed their generative AI for search, and the results were just as weird as you would expect when any random person can answer any random question and no verification takes place.
Leaders: Don't be the person who pushes your team to skip the data step.
Data practitioners: Work to educate your leadership and business stakeholders on the risks associated with this. Maybe share this article!