Five data trends set to dominate 2021


There is a sense that, having enjoyed a relatively stable period of prosperity following the 2008 financial crisis, we have been hit by one crisis after another in the last few years. It is the nature of our interconnected world – what once might have been a news item about the other side of the globe now has a direct impact on many organisations.

In such an environment, no one can truly predict what is going to happen. Yet organisations can be prepared for these transformative events – ready to thrive on anomalies while the competition struggles to keep up.

In what economists refer to as a K-shaped recovery, the past year has proven that the enterprises that have committed to being digital are best placed to adapt, and even thrive, in whatever comes their way. They are able to both react and pre-act, with digital as the driver that allows them to switch course at will. The rest need to pivot now.

But what enables this digital switch? Data and analytics. Some trends have shifted from gradual to immediate imperatives. Being able to identify and accommodate these critical data trends, or pivots, is closely linked to being able to use data effectively. So, what are these pivots, and how will they affect the market, and indeed enterprises themselves?

  • SaaS is everyone’s new best friend – Cloud computing has been one of the major lifelines of 2020, helping many businesses keep the lights on in virtual environments. Where once there was reticence to invest heavily in cloud and other as-a-service solutions, now many are embracing the approach, benefiting from scale and elasticity, as well as fast access to the likes of augmented analytics. This trend is going to continue, with a greater migration of databases and applications from on-premises, legacy infrastructure to cloud environments. In turn, this will drive a need for technologies that can access, move and harmonise data from multiple places. Containers and serverless infrastructure hold great potential for running applications in the cloud, but using them at scale requires significant organisational maturity and know-how.

  • Self-service has evolved to self-sufficiency – Compelling user interfaces are no longer a nice-to-have, but an imperative. At the same time, it is not a given that users always want to self-serve; increasingly, they want insights to come to them. As a result, we’ll see more micro-insights and stories for the augmented consumer. This will also help overcome the all-too-frequent issue of data being overlooked. Empowering users to access data, insights and business logic earlier and more intuitively will enable the move from visualisation self-service to data self-sufficiency. Artificial intelligence will play a major role here, surfacing micro-insights and helping us move from scripted, people-oriented processes to more automated data preparation and analytics. If data self-sufficiency can occur earlier in the value chain, anomalies can be detected sooner, and problems solved faster.

  • Shared data, visualisations and storytelling are consumed by the masses – Now more than ever, we’ve seen the importance of delivering the last mile in data storytelling and infographics. There has been a massive up-levelling in the conversation around data. This development is helping millions of people on the journey toward data literacy. But data is increasingly becoming politically fraught. How do we double-click beyond the picture? Get to the point behind the data point? Surface lineage and easily bring in new data sets? Technically, an expansion of context will be supported by more common data models and more business logic, accessible in catalogues and data marketplaces.

  • Up-to-date and business-ready data are more important than ever – Since the pandemic arrived, we’ve seen a surge in the need for real-time and up-to-date data. Alerts, refreshes and forecasts will need to occur more often, with real-time variables. On a macro level, we’ve seen disruptions to supply chains, with hospitals scrambling to procure PPE and consumers stockpiling toilet paper. Surges like these are accentuated in a crisis, and we have to build preparedness for them into operations. As the velocity of data increases, the speed of business needs to follow. Can we make “business-ready” data – information that is not only curated for analytics consumption, but which has timely business logic and context applied to it – accessible earlier? And can we automatically trigger either automated or human-based action?

  • Advanced analytics need to look different – In the wake of COVID-19, there has been an increase in interest in advanced analytics. But in uncertain times, we can no longer count on backward-looking data to build a comprehensive model of the future. Instead, we need to give particular focus to outliers, rather than exclude them. We saw this in the results of the A Level exams in England, where an algorithm used to determine scores cemented existing trends while locking out outliers. Simulations introducing unexpected inputs don’t predict the future, but they can reveal how a system will react to the unexpected. What-if analysis presents options upon which we can build contingency plans, while AI will increasingly reveal anomalies outside preconceived hypotheses, which can then be evaluated by humans.
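The automated triggering idea raised above – surfacing a surge in real-time data and firing an action before a human has to go looking – can be sketched in a few lines. This is a hypothetical illustration only; the `Reading` type, the baseline figures and the surge threshold are all invented for the example, not any vendor’s API:

```python
# Minimal sketch of a threshold-based surge alert on incoming sales data.
# All names and numbers here are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Reading:
    sku: str
    units_sold: int

def check_surge(readings, baseline, surge_factor=3.0):
    """Return (sku, actual, expected) for SKUs selling at more than
    surge_factor times their baseline rate."""
    alerts = []
    for r in readings:
        expected = baseline.get(r.sku, 0)
        if expected and r.units_sold > surge_factor * expected:
            alerts.append((r.sku, r.units_sold, expected))
    return alerts

# Example: toilet-paper demand spikes far above its baseline.
baseline = {"toilet_paper": 100, "ppe_mask": 50}
readings = [Reading("toilet_paper", 450), Reading("ppe_mask", 60)]
for sku, actual, expected in check_surge(readings, baseline):
    print(f"ALERT: {sku} sold {actual}, baseline {expected}")
```

In practice the alert would feed a notification system or kick off an automated reorder; the point is that the business logic is applied as the data arrives, not in a later report.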
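The what-if analysis described in the last bullet can likewise be sketched with a toy model – run the same system under normal conditions and under a shock, and compare how it reacts. The demand figures, the shock multiplier and the model itself are hypothetical assumptions for illustration:

```python
# What-if sketch: run a toy weekly stock model under a normal scenario
# and a demand-shock scenario, then compare. Numbers are hypothetical.

import random

def simulate_stock(start_stock, weekly_demand, weeks,
                   shock_week=None, shock_multiplier=1.0, seed=0):
    """Return the stock level at the end of each week.
    Demand varies randomly around weekly_demand; on shock_week it is
    multiplied by shock_multiplier (the unexpected input)."""
    rng = random.Random(seed)
    stock, history = start_stock, []
    for week in range(weeks):
        demand = weekly_demand * rng.uniform(0.8, 1.2)
        if week == shock_week:
            demand *= shock_multiplier
        stock = max(0, stock - demand)
        history.append(round(stock, 1))
    return history

normal = simulate_stock(1000, 100, weeks=8)
shocked = simulate_stock(1000, 100, weeks=8, shock_week=2, shock_multiplier=4.0)
print("normal :", normal)
print("shocked:", shocked)
```

Neither run predicts what will actually happen; comparing the two trajectories shows how quickly stock is exhausted under the shock, which is exactly the kind of contingency insight the bullet describes.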

As disruptive events become increasingly common, enterprises should be looking at the lessons of 2020 and applying them to their own organisations. That means accelerating their digital transformation and making sure data and analytics sit at the heart of it. Only then will they have the capacity to react more quickly, read signals more clearly and outline options for action. It’s a matter of survival, certainly, but in the right hands it can also be an opportunity to thrive.