Why don’t we just get the dev team to build it?

When we’re discussing integrations with prospective clients, one question comes up every so often: “We’ve got a dev team as part of the project, why don’t they just build it?” It’s not an unreasonable question: if there’s no compelling argument for using an integration service, why bother with one?


Getting to grips with integration development

Having some insight into how an integration works will provide some useful context. In simple terms, a straightforward integration follows roughly these steps:

  1. Get specific data from the data source (i.e. construct the relevant queries)
  2. Assess whether that data has changed since the last time we got it
  3. Decide whether the data needs to be posted to the target system
  4. Encrypt the data
  5. Compress the data
  6. Move the data from source to target
  7. Decompress the data
  8. Decrypt the data
  9. Transform the data
  10. Write the data to target

I should add that those steps vary in complexity from scenario to scenario. At a simple level, some systems can tell us which data has changed since we last saw it; others cannot, and the integration needs to perform the comparison itself. At a more complex level, some systems have sophisticated APIs which allow us to construct queries “on the fly” using configurable options; others require us to hard-code the queries before we can execute them.
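As a rough illustration, the steps above can be sketched in a few lines of Python. This is a minimal sketch, not Seamless itself: the function names, the SHA-256 change detection and the zlib compression are all assumptions for illustration, and the encryption steps (4 and 8) are elided.

```python
import hashlib
import json
import zlib

def fetch_changes(source_records, last_hashes):
    """Steps 1-2: take the records a source query returned and keep
    only those that have changed since the previous run."""
    changed = []
    for record in source_records:
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        if last_hashes.get(record["id"]) != digest:  # step 2: compare
            changed.append(record)
            last_hashes[record["id"]] = digest
    return changed

def transfer(records):
    """Steps 5-7: compress the payload for the wire, move it, then
    decompress on the far side. (Encryption, steps 4 and 8, is elided.)"""
    payload = zlib.compress(json.dumps(records).encode())
    return json.loads(zlib.decompress(payload))

def transform(record, field_map):
    """Step 9: rename source fields to match the target's structure,
    e.g. {"account": "company"}."""
    return {field_map.get(key, key): value for key, value in record.items()}
```

Even at this toy scale, the shape of the argument is visible: only `fetch_changes` and `transform` care which systems are in play; everything in between is generic plumbing.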

In the cold light of day, most of these steps are generic:

  • Step 1 is specific to the source system but not necessarily to your business need. That is to say, “Dear system, please tell me about new records created in the past 30 seconds” is a generic request of whatever system you’re using, not something specific to your business. If the source is bespoke, then naturally this step is wholly bespoke too.
  • Step 2 is typically generic, but will be driven by business need.
  • Step 3 is a business decision largely informed by the outcome of Step 2.
  • Steps 4 through 8 are generic for any (well designed) integration.
  • Step 9 is specific to your setup in the target system. The data transformation will need to reflect the data structures in your target system, including both the overall data structure (an account in the source may be called a company in the target) and potential dropdown list values (Mister maps to Mr, status set to “Active” maps to status set to “Live”, etc.).
  • Step 10 is specific to the target system in the same way that step 1 is specific to the source system.
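To make the value translation in Step 9 concrete, here is a minimal sketch of dropdown-value mapping. The Mister/Mr and Active/Live pairs come from the examples above; the other entries and the function name are invented for illustration.

```python
# Hypothetical lookup tables for step 9's dropdown translation; the
# Mister->Mr and Active->Live pairs are from the text, the rest are
# illustrative.
TITLE_MAP = {"Mister": "Mr", "Missus": "Mrs"}
STATUS_MAP = {"Active": "Live", "Closed": "Archived"}

def translate_values(record):
    """Map source dropdown values onto the target system's equivalents,
    passing unrecognised values through untouched."""
    out = dict(record)
    if "title" in out:
        out["title"] = TITLE_MAP.get(out["title"], out["title"])
    if "status" in out:
        out["status"] = STATUS_MAP.get(out["status"], out["status"])
    return out
```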

Steps 1, 9 and 10 are specific to the “technical scenario” (i.e. the systems in play) and steps 2 and 3 are where your business requirements are going to drive how the integration should work. The rest of it is pretty much the same, whether you’re connecting your CRM to your helpdesk or your accounting package to a front-end e-commerce system.


Paying to reinvent the wheel

With that in mind, let’s return to the question: “We’ve got a dev team as part of the project, why don’t they just build it?” The short answer is “Because the vast bulk of their development effort would be spent on generic integration features which are already accessible to you at much lower cost than building them yourself.”

Naturally, there is more complexity to the answer. In my experience, point-to-point integrations are usually developed to a “minimum functioning feature set”, i.e. they can get the right data from A to B on day one. That’s not necessarily a terrible place to be, but it is not as good as it could be.

A service built first and foremost as an integration service will offer sophisticated features which your project budget is unlikely to underwrite for a single point-to-point integration: encryption, compression, elegant management of downtime at the source or target, the ability to be easily extended, and so on. All of these features need to be designed and built.

Additionally, developers are not “just developers”. That’s like saying builders are just builders. The technical skills required to marshal data, overcome infrastructure and connectivity complexity, manage reporting, etc. are quite different to the skills required to configure / build your new application. Sure, there’s some overlap. Most bricklayers can do some paintwork, but for painting decorative ceiling cornices, I’d rather get a painter.


The big picture

In a 2015 paper on digital transformation projects, Gartner noted that “many organizations already favor a new kind of ‘build’ that does not include out-of-the-box solutions, but instead is a combination of application components that are differentiated, innovative and not standard software or software with professional services (for customization and integration requirements), or solutions that are increasingly sourced from startups, disrupters or specialized local providers.”

We’re happy to be one of those startups, even if it means using American spellings on our blog post.


No-one likes downtime. But if it has to happen, at least make it happen elegantly

It’s hard to write a post extolling any virtues of downtime but, in the wake of last night’s Azure UK outage, we took some small satisfaction that Seamless behaved exactly as we designed it to, and as we would have hoped. Connectivity to the UK South Azure data centre was lost between approximately 22:30 and 01:40.



Accepting that we weren’t going to fix Azure’s connectivity issues, where do we see the positives for Seamless?


1. Our alerting let us know it was happening (before Microsoft confirmed any issues)

Seamless monitoring raised the first connectivity alert at approximately 19:43. This seemed to be an isolated alert, but a second arrived at 22:30 and alerts began to be raised regularly from that point. We then began our own investigation, and the Azure service page confirmed a connectivity issue at approximately 23:30. Shortly after that, we issued our alert to the Seamless user community.


2. When Azure connectivity returned, Seamless shrugged its shoulders and carried on working

Bearing in mind that connectivity was the primary issue, the service itself actually continued running. As soon as connectivity was restored, data continued sync’ing with no intervention from us (or, of course, our customers). Any data that had not been collected at source would’ve then been collected and posted to target systems. Any data “in transit” when the connectivity issues started was processed and sync’ed to relevant target(s).
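A minimal sketch of this “shrug and carry on” behaviour: records waiting to be posted stay in a queue, and a connectivity failure simply means we retry later rather than lose data. The function and parameter names here are illustrative, not the Seamless internals.

```python
import time
from collections import deque

def drain(queue, post, retry_delay=1.0, max_attempts=5):
    """Post queued records to the target. On a connectivity error the
    record stays at the head of the queue and we retry with backoff,
    so nothing 'in transit' is lost while the target is unreachable."""
    while queue:
        record = queue[0]
        for attempt in range(max_attempts):
            try:
                post(record)
                queue.popleft()  # only dequeue once safely delivered
                break
            except ConnectionError:
                time.sleep(retry_delay * (2 ** attempt))  # back off
        else:
            return False  # target still down; the queue is preserved
    return True
```

The design choice that matters is that a record is removed from the queue only after a successful post, so an outage of any length leaves the backlog intact and ready to sync once connectivity returns.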


3. We had mitigation actions ready in case we needed to support customers

Our mitigation options are typically to redeploy the service and the config to other Azure locations (subject to relevant instructions from our customers bearing in mind potential data considerations). In the event that the UK South region was not restored by the start of the business day, we were in a position to relocate the service and client configurations to Western Europe (Dublin). This would’ve taken us less than an hour and no data would’ve been lost in this process.

As Microsoft restored the Azure service around 01:40, this was not necessary.



No-one likes downtime, and we recognise that we’re lucky these three hours fell outside core business hours for our current customer base. Even so, we continue to believe that a Microsoft data centre is more secure and robust than anything we could build ourselves. In this light, we architected the Seamless service to “fail elegantly” and, although disappointed this had to be tested in anger, we’re delighted that it worked exactly as we expected it to.

The counter-intuitive benefit of shorter projects

As we work to build the partner network around Seamless, I often find myself talking through (what we believe to be) the benefits of using Seamless to deliver integrations. We have a background in delivering IT change projects, and one of our favourite Seamless benefits is that developing integrations becomes much, much quicker. If we have a connector plugin already built, it is simply a matter of specifying the right fields and associated rules. If we don’t, we can often build one in under a week (assuming reasonable API access).

For companies who make a lot of their revenue selling time (i.e. time taken by software engineers to build applications), it seems counter-intuitive that we’re touting shorter projects as a benefit. Er, what’s up with that?


Our response is simple: I’d rather you sold a 60-day project than lost a 120-day project to a competitor.

Our partners will be competing with any number of other organisations who will have different ways to deliver software. Today’s average user has experience of sophisticated web services and expects that software delivery companies can quickly connect up different services. Agile approaches add an additional dimension of expectation because clients expect to see phased delivery of packages of requirements.

Sure, reducing delivery time reduces upfront professional services revenue, but it makes our partners’ projects more commercially attractive. If this can be done in a way that also reduces risk (using technology that is proven to work) and increases flexibility (extendable plugins, a myriad of configuration options, etc.), then the offer becomes more attractive on day one (commercially) and over time (ability to adapt to inevitable business change).

Finally, let’s not lose sight of the fact that, over time, “lost” revenue may be recovered (and more) through sharing of the subscription fee.



Learn more about partnering with Seamless. We could talk about it all day.

Rolling out the big guns – SQL Server, MySQL and Oracle

The initial development of Seamless focused on ensuring we could support the clients and opportunities we were working on at the time. That quickly resulted in connector plugins for Microsoft Dynamics, Freshdesk, Autotask, Hoopla and SharePoint lists. As these have been stable and in use for a while now, we’ve had the chance to focus on building connector plugins for more widely used, enterprise-grade tools.

That left us with an interesting choice … what next? We decided we’d focus on a handful of widely used applications which are deployed in thousands of different ways at organisations across the globe: Microsoft SQL Server, MySQL and Oracle. We’re delighted to announce these are now built, tested and hungry for data.


Seamless wall of fame May 2017


These plugins are all configurable, offering the best mix of rapid deployment and ability to control and extend the plugin’s work to meet the requirement at hand.

  • Configure mappings (i.e. fields to integrate) using XML to identify relevant tables and fields
  • Extend the feature set of these plugins using .Net to code extensions which are deployed on top of the existing plugins (i.e. you are not constrained by the limitations of any given plugin)
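As a sketch of what XML-driven mapping configuration can look like, here is a hypothetical mapping document and a few lines of Python to read it. The element and attribute names are invented for illustration and are not the actual Seamless schema.

```python
import xml.etree.ElementTree as ET

# A hypothetical mapping document in the spirit described above;
# table and field names are invented, not the Seamless schema.
MAPPING_XML = """
<mapping source="Accounts" target="Companies">
  <field source="AccountName" target="CompanyName" />
  <field source="Status" target="LifecycleStage" />
</mapping>
"""

def load_field_map(xml_text):
    """Parse the source/target tables and the field pairs out of an
    XML mapping definition."""
    root = ET.fromstring(xml_text)
    fields = {f.get("source"): f.get("target") for f in root.findall("field")}
    return root.get("source"), root.get("target"), fields
```

The appeal of this style of configuration is that pointing the same plugin at a different table or field set is an XML edit, not a code change; code only enters the picture when you extend a plugin.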


If you have an integration need for any of these technologies, please reach out and discuss with us how we can address it, quickly and affordably.

More information on Seamless can be found here.

Business flexibility, celeb news, Lego modelling and IT architectures – you heard it here first

As much as it pains me to admit it, I fear that Gwyneth Paltrow and Chris Martin were onto something when they coined the term “conscious decoupling”. In the data integration space, the ability to break down a service into components is a significant enabler of flexibility. That’s good because technical flexibility translates to business flexibility. If the only constant is change, as we’re so often told, then flexibility in technical architectures is the way to prepare for change.