Why is integration so often an afterthought?

On their own, a heart, lungs and a liver would be pretty useless. But when you join them up using the vascular system (arteries, veins, capillaries), all of a sudden the whole becomes greater than the sum of its parts. Each of our organs has a “specialist role”, and when they perform that role, the work they do is invaluable. The heart pumps blood, the lungs oxygenate blood, the liver helps you recover from a big night out.

In this admittedly sketchy analogy, IT systems are no different. Each IT system in your business has a specialist role which it (hopefully) does very well. Most businesses will have a financial system (invoices, accounting, etc.). Some will have point-of-sales systems (POS), some have a sales or contact management system (CRM), most will have an email system. Many, possibly even most, will have a core line-of-business system which helps manage or control the primary business function of the company.

All too often there is no vascular system connecting all of these disparate applications together. Each system operates using its own data set “knowing” things which it is not “sharing” with other systems. Has Customer A paid us in full? Does Customer B have any open queries? These are pieces of information which users across the business could, and should, use to make more informed decisions.

Why are businesses not building their vascular systems?
In other words, why do so many businesses not focus on linking up their IT applications?


Is there a fear that integration is too expensive?

Possibly. Traditionally the software that connects up applications – known as middleware – was expensive. It required specialist infrastructure and a lot of work just to get the application talking to the middleware.

Two market forces have arisen to challenge this. Firstly, in the era of cloud applications, the only infrastructure required to run modern middleware is often a data connection. Secondly, the advent of common APIs means that connecting to many applications has become much simpler.

Is there a fear that integration is too complicated?

Possibly. Connecting system A to system B requires some degree of planning. It cannot be denied that there will be some work to line up data fields in either system and to ensure that business processes on either side will still operate with data being added/updated.

Using templated connectors and a proven, rigorous analysis process means that these complications can be minimised. Most businesses are not in the business of running complicated IT projects. Integration templates, supported by repeatable analysis for non-standard data, mean the complications can be quickly resolved.
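The "lining up data fields" described above is, at its simplest, a mapping table between the two systems. The sketch below is purely illustrative: the system shapes, field names and values are invented for this example.

```python
# Illustrative sketch: lining up data fields between two systems.
# The field names below are invented; a real template would be richer.

# A mapping "template": target field -> source field
FIELD_MAP = {
    "AccountName": "customer_name",
    "InvoiceTotal": "amount_due",
    "Currency": "currency_code",
}

def map_record(source_record: dict) -> dict:
    """Translate one record from System A's shape into System B's shape."""
    return {target: source_record[source] for target, source in FIELD_MAP.items()}

record_a = {"customer_name": "Acme Ltd", "amount_due": 1250.00, "currency_code": "GBP"}
print(map_record(record_a))
```

Because the mapping is data rather than code, a templated connector can ship with a sensible default map and let the analysis phase adjust only the non-standard fields.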

Is there a fear that integration will take too long?

Possibly. For a myriad of reasons, IT projects have a reputation for taking longer than expected. Most companies are not in the business of running long IT projects, they take energy and focus away from day-to-day operations.

Using a well designed toolset which has a demonstrable ability to rapidly deploy integrations removes a lot of the risk. Templates for many common applications, coupled with underlying foundation connectors for common integration scenarios, mean that any development is fast-tracked before it even begins.

Does it have to be expensive, complicated or time consuming?

Of course not. Although admittedly, if the answer were yes that would be a pretty surprising take for this post!

For too long, sophisticated integration has been the preserve of large enterprises who could justify the cost, and associated complications, of managing traditional options. Cloud options and the ubiquity of APIs have drastically reduced the cost profile of setting up and operating integrations. Complications can be controlled by using a reliable toolset. And the time to deliver can be accelerated by using re-usable components.

With the introduction of reliable, cloud based infrastructure-as-a-service options, coupled with well designed tooling, sophisticated integration is accessible to companies of all sizes.

Walking the walk

How do you tell the difference between “walking the walk” and “talking the talk”? We’re delighted whenever we can meet a less typical client requirement that also enables us to demonstrate different aspects of the Seamless offering.

In this post, we’ve chosen to share the stories of 3 recent implementations that demonstrate different aspects of how Seamless offers flexibility when delivering application integration.

We’re not looking to integrate Microsoft apps

You’ll not be surprised to learn we’re big advocates of Microsoft, but they’re not the only app vendor out there. In a recent implementation, a client is using Seamless to keep a pair of helpdesks in sync, one of which runs on BMC Remedy and the other on ManageEngine.

This implementation demonstrates how we can use Microsoft Azure to enable digital transformation outside of the Microsoft realm. Remedy and ManageEngine are both commercial applications from different vendors. We’re using Seamless, running in Azure, to keep them in sync.

We’re not just connecting two applications together

We have a client rolling out Dynamics 365 into an existing architecture which contains an Azure service bus for transporting data across the enterprise. Seamless has been set up to sync data to and from the service bus.

This demonstrates how the componentised nature of Seamless allows us to deploy into more complex contexts than simply syncing between System A and System B. To sync to/from a data bus, we only use half of a typical implementation. In turn, this meant we needed to set up some of Seamless’ internal data transfer objects differently. The configurability of Seamless meant we were able to achieve this within a few days of joining the project team.

In this instance, we’ve deployed into the client’s Azure estate rather than our own. This enables the client to take advantage of their own existing security infrastructure within Azure, as well as their pre-existing preferential commercial terms from Microsoft.

We’d like to use some of our technical skills

Syncing contacts and accounts between Dynamics and Autotask is bread-and-butter stuff for Seamless, so why is it being mentioned here? In this example, our client, Verdant Services, a consulting and cloud services provider, is configuring their own integration.

Verdant are using the Seamless Integration Workbench to specify data mappings, update transformations and generally take ownership of their integration. As the project progresses, they’re also able to quickly change their integration to move data between various staging and live environments.

This demonstrates the power of the Seamless Integration Workbench. With a few hours training, our client has taken ownership of the integration inside their software delivery project. This enables them to be flexible and respond to the needs of the project as it progresses.

Tomorrow’s IT operations are as important as today’s software delivery

When we set out to build Seamless, one of the things we focused on, quite obsessively, was reducing risk in a software delivery project. Admittedly, this is a reflection of our founders’ backgrounds in software delivery. It also vastly understates the value of delivering Seamless as a managed service. Perhaps we’ve done ourselves a disservice?

Erm – let’s not get ahead of ourselves, have we actually addressed delivery risk?

In the first instance, I’d like to trumpet our engineering a little bit. If we need to connect to a relatively modern API (think RESTful or SOAP), we typically have data syncing in under a week. Once we’ve “proved the pipe” (demonstrated that we can move data between source and target), we can shape that as necessary to meet the business requirement. As this is readily demonstrable, the delivery risk is greatly reduced. There is no longer a long period of writing code and hoping for the required outcome.
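Conceptually, “proving the pipe” is a minimal read-and-write loop between two endpoints. The sketch below uses invented stand-in classes rather than real connectors, purely to show the shape of the step.

```python
# "Prove the pipe" sketch: move records from a source to a target.
# FakeSource and FakeTarget are invented stand-ins for real API connectors.

class FakeSource:
    def fetch_changed(self):
        # A real connector would call the source system's API here.
        return [{"id": 1, "name": "Acme"}, {"id": 2, "name": "Globex"}]

class FakeTarget:
    def __init__(self):
        self.records = {}

    def upsert(self, record):
        # A real connector would POST/PATCH to the target system's API.
        self.records[record["id"]] = record

def sync(source, target):
    """Pull changed records from the source and push them to the target."""
    moved = 0
    for record in source.fetch_changed():
        target.upsert(record)
        moved += 1
    return moved

target = FakeTarget()
print(sync(FakeSource(), target))  # number of records moved
```

Once a loop like this is demonstrably moving data end to end, the remaining work is shaping the data to the business requirement, which is far lower-risk than building the whole pipe blind.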

Getting there is only half the battle

As old hands at “change the business”, we may have fallen into the trap of forgetting about “run the business”. If you’ll forgive the cliche, delivery of a new software solution is the beginning of a journey rather than the end.

A data integration is a living piece of infrastructure that needs to be operated. Even if there are no changes to the business requirement, integrations need to be looked after and cared for.

  • Is the integration able to connect to both source and target?
  • Is the volume of data being moved roughly in line with expectations?
  • Does the integration have enough oomph to ensure that data is sync’ed within the required timeframes?

The Seamless monitoring features, including meaningful alert content, mean that keeping tabs on these questions becomes part of the regular operations monitoring regime. Additionally, alerting on exception means that you can assume all is well until told otherwise.
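The health questions above can be framed as checks that stay silent when all is well and only surface an alert on exception. This is a hypothetical sketch: the check names and thresholds are invented, not Seamless internals.

```python
# Alert-on-exception sketch: checks return an empty list when healthy.
# Check names and thresholds are invented for illustration.

def check_connectivity(source_ok: bool, target_ok: bool) -> list:
    alerts = []
    if not source_ok:
        alerts.append("Cannot reach source system")
    if not target_ok:
        alerts.append("Cannot reach target system")
    return alerts

def check_volume(records_moved: int, expected_low: int, expected_high: int) -> list:
    # Volumes well outside the expected band often signal an upstream issue.
    if not expected_low <= records_moved <= expected_high:
        return [f"Volume {records_moved} outside expected range "
                f"{expected_low}-{expected_high}"]
    return []

alerts = check_connectivity(True, True) + check_volume(120, 50, 500)
print(alerts)  # empty list: all is well until told otherwise
```

The useful property is the default silence: operations staff only see output when one of the questions above has an unwelcome answer.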

Integration issues are often a symptom of broader problems

One thing we’ve observed is that often issues with the integration are just symptoms of broader problems. Our monitoring is proactive (we go looking for issues) and our reporting is sufficiently detailed that we’re able to help Seamless users spot and trouble-shoot underlying issues, often before they’ve noticed them themselves.

  • Are both the source system and target system accessible? If Seamless is unable to reach systems, is this because of an underlying systems issue that needs to be addressed?
  • Are there data mismatch issues between source and target? Often this is a case of changes to reference data in one system or another.
  • Are data volumes completely outside of the expected range? If so, is this driven by something in the business (e.g. a sales promotion) or is one of the systems experiencing an underlying issue?

In 2017 we had only one service availability issue. Sure – we’d like it to have been none but, if there’s going to be some downtime, let’s at least handle it elegantly.

In July 2017 there was a short-lived Azure UK connectivity issue. Our alerting meant that we knew it was happening before it was confirmed by Microsoft and we were able to manage expectations across our client base. In turn, this allowed clients to manage expectations within their user community. We also provided the option of redeploying to another Azure geography. Thankfully, the outage was contained to a few non-business hours (for our UK customers) and there was no need to exercise any of these options.

The underlying Seamless service remained live and, as connectivity returned, any data that had been caught between source and target was quickly sync’ed by the service.

The only rule of business requirements is that they change

One of our first prospects for Seamless was a company who, as a result of an acquisition, found themselves with two software platforms for managing helpdesk tickets across their considerable IT estate. As part of the merger process, a vendor had written a point-to-point integration which sync’ed relevant tickets back and forth.

Six months down the line, there was a need to change the rules for which tickets would sync between the apps. After months of avoiding the question and 3 days of consulting work (on fees), the vendor quoted tens of thousands of pounds to change the integration. These were well-known, well-documented apps with mature APIs; updating the integration should’ve been a piece of cake. It became easier to change business rules inside the company than to change the integration, and that’s just not how IT should work.

The use of config and straight-forward mapping features within Seamless means that meeting the needs of changing business requirements doesn’t require re-engineering entire integrations every time a field, a data map or an option set changes.
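The principle that a field, data map or option-set change should be a config edit rather than re-engineering can be illustrated as follows. The config structure and field names here are invented for the example, not Seamless’ actual format.

```python
import json

# Illustrative sketch: the field mapping and option-set translations live
# in config, not code. The config shape and names below are invented.

config = json.loads("""
{
  "mappings": {"ticket_title": "summary", "ticket_status": "state"},
  "option_sets": {"state": {"Open": "New", "Closed": "Resolved"}}
}
""")

def apply_mapping(source_record: dict, cfg: dict) -> dict:
    out = {}
    for src_field, tgt_field in cfg["mappings"].items():
        value = source_record[src_field]
        # Translate option-set values where a translation is configured.
        value = cfg["option_sets"].get(tgt_field, {}).get(value, value)
        out[tgt_field] = value
    return out

print(apply_mapping({"ticket_title": "Printer down", "ticket_status": "Open"}, config))
```

When the business renames a status or adds a field, the change is a one-line edit to the config document, which is exactly the kind of change the anecdote above shows a point-to-point build failing to absorb.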

Not every IT organisation is the same

This blog post was, in part, inspired by a conversation with a prospective client. In our initial conversation, the primary concern was the delivery timescale of the integration requirements. As the conversation progressed, it emerged that the broader procurement strategy was to use contractors to deliver software, with a view to streamlining the size of the in-house IT team. That’s not uncommon: by their very nature, project teams are short-term creatures designed to achieve an outcome (the rollout of new software).

In discussing why Seamless may be a better option than using contract developers to build a point-to-point solution, I stressed not only the delivery risk and timing considerations, but that the managed service remains in place after the project team is stood down. This enables the in-house IT team to focus their effort where they can deliver more “bang for buck” whilst the Seamless monitoring ensures that when issues occur, the IT team are quickly informed with meaningful information to trouble-shoot any underlying systems issues.

No-one likes downtime. But if it has to happen, at least make it happen elegantly

It’s hard to write a post which talks about any virtues associated with downtime but, in the wake of last night’s Azure UK outage, we had some small sense of satisfaction that Seamless behaved both as we designed and as we would’ve hoped. Connectivity to the UK South Azure data centre was lost between about 22:30 and 01:40.


Accepting that we weren’t going to fix Azure’s connectivity issues, where do we see the positives for Seamless?


1. Our alerting let us know it was happening (before Microsoft confirmed any issues)

The Seamless monitoring raised the first connectivity alert at approx. 19:43. This seemed to be an isolated alert; however, we received a second connectivity alert at 22:30, and alerts began to be raised regularly from that point. We then began our own investigation, and the Azure service page confirmed a connectivity issue at approx. 23:30. Shortly after that, we issued our alert to the Seamless user community.


2. When Azure connectivity returned, Seamless shrugged its shoulders and carried on working

Bearing in mind that connectivity was the primary issue, the service itself actually continued running. As soon as connectivity was restored, data continued sync’ing with no intervention from us (or, of course, our customers). Any data that had not been collected at source would’ve then been collected and posted to target systems. Any data “in transit” when the connectivity issues started was processed and sync’ed to relevant target(s).


3. We had mitigation actions ready in case we needed to support customers

Our mitigation options are typically to redeploy the service and its config to other Azure locations (subject to relevant instructions from our customers, bearing in mind potential data-residency considerations). In the event that the UK South region had not been restored by the start of the business day, we were in a position to relocate the service and client configurations to North Europe (Dublin). This would’ve taken us less than an hour and no data would’ve been lost in the process.

As Microsoft restored the Azure service around 01:40, this was not necessary.

No-one likes downtime, and we recognise that we’re lucky these three hours fell outside of core business hours for our current customer base. In practice, we continue to believe that a Microsoft data centre is more secure and robust than anything we would build ourselves. In this light, we architected the Seamless service to “fail elegantly”, and although we’re disappointed this had to be tested in anger, we’re delighted that it worked as we expected.

Business flexibility, celeb news, Lego modelling and IT architectures – you heard it here first

As much as it pains me to admit it, I fear that Gwyneth Paltrow and Chris Martin were onto something when they coined the term “conscious uncoupling”. In the data integration space, the ability to break down a service into components is a significant enabler of flexibility. That’s good, because technical flexibility translates to business flexibility. If the only constant is change, as we’re so often told, then flexibility in technical architectures is the way to prepare for it.