In an era where user experience is a deciding factor for potential customers, the onus on organisations has grown to find innovative ways of improving interaction with end-users through digital channels. This comes down to the quality of the software being produced: is it flexible enough to meet changing end-user demands? Is it agile enough to contend with varying degrees of traffic? Are all of its components necessary or do they hinder the customer’s journey?
With these new imperatives in mind, companies have come to the realisation that the more regularly software can be shipped and delivered, the better. The best companies make small adjustments or improvements to software on an almost continuous basis, using user feedback as a guide to what’s working well and what can be improved. This process, known as Continuous Integration/Continuous Delivery (CI/CD), is key to ensuring digital tools always deliver the highest value to the end user by maximising their overall experience when interacting with a piece of software.
CI/CD contains two separate but complementary parts.
CI is the process of automatically testing and building software each time new application code is integrated into a shared repository. This yields “builds” of the application that are in a working state at all times. Unit tests run as part of the continuous integration process, validating the functionality of the software. This identifies bugs up-front and prevents wasted cycles further down the feedback loop.
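As a minimal sketch, the CI half of the pipeline can be pictured as a script that gates the build behind the unit tests; the commands below are placeholders, and a real pipeline would invoke the project’s own test runner and packager.

```shell
#!/bin/sh
# Minimal CI sketch: on every commit, run the unit tests and only
# produce a build if they pass. Both functions are placeholders for
# whatever tooling a team actually uses.
set -e  # stop immediately if any step fails

run_unit_tests() {
    # placeholder for e.g. `pytest`, `go test ./...`, or `mvn test`
    echo "unit tests passed"
}

build_artifact() {
    # placeholder for compiling/packaging the application
    echo "build created and ready for delivery"
}

run_unit_tests   # the gate: bugs are caught here, before a build exists
build_artifact   # every successful run yields a working build
```

Because `set -e` aborts on the first failing command, a failing test run means no build is produced, which is what keeps every surviving build in a working state.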
CD is the process of delivering the applications created in the CI process to a production-like environment, where they are put through additional automated tests to ensure they function as expected when pushed to production environments and put in the hands of real users. It also ensures the latest build interacts with other software and applications as intended. Successful CD means builds are always ready to deploy to production, either via automation (a related process called Continuous Deployment) or a manual step such as running cf push.
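The delivery half can be sketched the same way. `cf push` is the Cloud Foundry CLI command the text mentions; the application name and the dry-run guard below are illustrative assumptions, not part of any real pipeline.

```shell
#!/bin/sh
# Hedged CD sketch: take the build from CI, deploy it to a staging
# (production-like) space, and smoke-test it there. DRY_RUN=1 makes the
# script print the deploy command instead of invoking the real `cf` CLI;
# "myapp" is a hypothetical application name.
set -e
DRY_RUN=1

deploy_to_staging() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: cf push myapp"
    else
        cf push myapp
    fi
}

smoke_test() {
    # placeholder: a real pipeline would hit the staging URL and check health
    echo "smoke tests passed: build is production-ready"
}

deploy_to_staging
smoke_test
```

If the smoke tests pass, the same build can be promoted to production, either automatically (Continuous Deployment) or by a person running the push manually.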
Teams that practice CI/CD can release new application code to production in minutes, when it makes the most business sense to do so rather than based on predetermined release windows. With CI/CD, code is put through rigorous automated testing before it can be shipped, significantly reducing the risk of introducing bugs or broken code to production environments.
Creating continuous end-to-end value
1 in 5 businesses in the UK have not managed to deliver digital transformation projects, citing a lack of understanding of the technologies available, and of where they should be incorporated, as the biggest reasons. This is indicative of the tendency for stakeholders within a firm to look at value streams but neglect the people, processes, and technologies in play. Improvement requires attention to all three.
It is imperative for companies to understand the workflows from idea to production – or, more importantly, in reverse: to know the value the customer buys from them, and the path it takes through the business to arrive there. It’s important to create a real-time picture of the entire system and find the bottlenecks that hold organisations back, before jumping into potentially futile localised improvement areas.
This means that before deploying CI/CD, companies need to investigate and document the current state, including the teams involved, the bottlenecks in the workflows and the entrance and exit criteria at each stage. Where outdated or manual steps are being used, the introduction of technology may cause more harm than good and negatively impact business outcomes.
Building out the application delivery pipeline
Digital transformation isn’t about shovelling more features and apps into the market. It’s about changing the relationship with customers through useful software. That means the relationship involves more than shiny new things. It’s reinforced through a reliable, secure, cost-effective set of services.
Continuously delivering on underlying platforms is crucial to achieving this. Taking major downtime during quarterly upgrades and leaving mission-critical apps unpatched can hinder the platform’s reliability and security. Moreover, large teams doing intensive manual management of multi-site platforms keep platform costs high, making it harder to pass savings on to customers. There’s a better way.
Organisations across industries are increasingly putting their platform onto pipelines. That means continuous updates (without taking downtime), immediate patching when vulnerabilities emerge, and a complete hands-off approach for system upgrades. This can result in improved reliability, better security, and lower costs.
Not forgetting the small stuff
It is possible for companies to go fast whilst simultaneously improving stability. Make no mistake, that’s not a trivial accomplishment – it’s hard to do. Yes, deploying small changes frequently means having a smaller change surface and simpler debugging. But complex change processes, brittle architectures, and thin infrastructure APIs all make it hard to continuously deliver software and platforms.
Companies need clear change processes, light on review boards and heavy on automation. They also need a resilient architecture that can tolerate rolling upgrades to compute and storage, and infrastructure APIs that make it possible to automate all the necessary provisioning, de-provisioning, and configuration activities. The key to all of this is deeply understanding what customers need and staying laser-focused on the desired business outcomes.