For most companies, the answer is no.
Industry leaders in every sector are racing to adopt sophisticated systems that promise to introduce AI and ML into their organizations. Executives are often sold on the concept of a technology that can see things humans can't, recommending changes to the business that unlock new opportunities and drive major efficiencies.
Although the opportunities are definitely there, and in some cases so is the technology, there is a major disconnect between the possibilities AI and ML present and the operational readiness of most organizations' data environments.
“AI is like sex in high school. Everyone says they’re doing it, very few actually are, and those who are, are doing it wrong.”
The reality, though, is that a data warehouse and an application layer represent the beginning and end of a process, and what's usually missing in the middle is a fairly sophisticated stack that transforms raw data at one end into usable information at the other. The problem is, not only is this "in-between" layer a difficult problem to solve, it's also a major unknown for many decision makers setting IT strategy, budget, and timelines.
“If Watson can win Jeopardy, it must have the cognitive ability to wade through my data and develop insight, right?”
Get all the data together — Check.
Purchase applications that turn raw data into insight — Check.
The concept here is that once we get all the data to a common place, we can plug it into whatever we want. Simple. The problem is, the data coming out of the warehouse(s) lacks a common standard, and the applications sitting on top of it can't be fully implemented because…
Even the best software needs good data.
At the end of the day, any application, automation, or AI/ML technology needs to be fuelled by refined, standardized data in order to function at full capacity. Without standard data, the game-changing tech that promises to revolutionize your business will be limited to the lowest common denominator of standard data it can access. Usually, that means an extremely limited amount of information, even if it is now all coming from the same place.
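To make "lowest common denominator" concrete, here is a small sketch in Python. The records, field names, and formats below are all invented for illustration, not a real schema: two warehouses describe the same customer, but because nothing is standardized, the only fields an application can safely rely on are the ones both sources happen to agree on.

```python
# Hypothetical records from two warehouses feeding the same BI tool.
# Field names, date formats, and units all differ -- these are invented
# examples, not a real schema.
crm_record = {"cust_id": "A-1019", "signup": "03/15/2021", "region": "NE"}
erp_record = {"customer": "A-1019", "created_at": "2021-03-15", "revenue_usd": 4200}

def usable_fields(a: dict, b: dict) -> set:
    """Return the field names both sources agree on, with none of the
    renaming or normalization a real refinery would perform."""
    return set(a) & set(b)

# Even though both records describe the same customer, the raw field
# names share zero overlap -- the app's "common denominator" is empty.
print(usable_fields(crm_record, erp_record))  # -> set()
```

The two records clearly describe the same entity, yet without a refinement step the overlap an application can consume is literally empty. That gap between "data exists" and "data is usable" is the whole point.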
Data needs to be refined into fuel and delivered in needed quantities to the applications it is tasked with powering. A system that does this, or better yet, a data refinery process that checks all of the required boxes, looks like this:
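As a code-level sketch of what one stage of such a refinery does (all field names and adapters here are invented for illustration, not a prescribed design): each source gets its own small adapter that maps source-specific fields into one shared output schema, so everything downstream sees the same shape.

```python
# Minimal sketch of one "data refinery" stage: raw records arrive in
# source-specific shapes and leave in a single standardized schema that
# downstream apps can rely on. Field names are hypothetical.
from datetime import datetime

def refine_crm(raw: dict) -> dict:
    """Adapter for a CRM-style record with US-format dates."""
    return {
        "customer_id": raw["cust_id"],
        "signup_date": datetime.strptime(raw["signup"], "%m/%d/%Y").date().isoformat(),
    }

def refine_erp(raw: dict) -> dict:
    """Adapter for an ERP-style record that already uses ISO 8601 dates."""
    return {
        "customer_id": raw["customer"],
        "signup_date": raw["created_at"],
    }

# One adapter per source; one schema out.
refined = [
    refine_crm({"cust_id": "A-1019", "signup": "03/15/2021"}),
    refine_erp({"customer": "A-1019", "created_at": "2021-03-15"}),
]
assert refined[0] == refined[1]  # same customer, now directly comparable
```

The design choice worth noting is that the standardization lives in the pipeline, not in each application: every new BI tool or model plugs into the refined schema rather than re-cleaning the raw feeds itself.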
A variety of data means a variety of problems.
This turns most data scientists into data janitors, and it's also the reason most enterprise apps will never fully reach enterprise deployment.
“The problem isn’t lack of direction or ability. It’s lack of refined data.”
This may seem like a huge leap forward, but in reality, it’s very much attainable. In fact, many of the pieces required are laid out in the above graphics. On your own, it’s not a quick or easy build, but it is a necessary problem to solve if you are truly thinking from an enterprise perspective.
The takeaway here is not to stop looking at new technology that can unlock huge potential. It's to remember that regardless of how shiny and amazing new solutions are, they all require rocket fuel to blast off. When we talk about game-changing enterprise apps, that rocket fuel is standardized data, and a process to refine it needs to be implemented in parallel with the adoption of new Business Intelligence tools and applications.
Read more about how we're delivering clean libraries of data to drive AI tech.
Want to learn more about aligning your business and data strategy?