Between them, Bill Isherwood and Chris Houghton have built bespoke integrations to practically all the major ERP systems in the retail, footwear and apparel market. Originally published in the WhichPLM Annual Review 2014, we’re giving our WhichERP readers a chance to read this exclusive article.
The ultimate objective for most businesses is a single consolidated and coordinated business system that integrates all activities and business processes from “catwalk to sidewalk”, and allows the efficient and effective use of business information throughout the organisation. In this scenario, there would be the fabled “Single Version of The Truth”: all information would be input once only and then, subject to strict control, made immediately available to all users across the enterprise and also to the organisation’s extended supply chain – vendors, factories, material suppliers, testing companies, customers, carriers, and more.
Increasingly, this end goal is being described with the catch-all term “digital transformation”, which reflects its status as an all-encompassing project that can be extremely difficult – but not impossible – to manage.
More than 40% of companies will have twenty or more systems to include in any audit of their current business systems environment, and when you consider the reams of data that can exist inside one system alone, you’ll begin to realise the true context within which a rosy-sounding “digital transformation” is expected to take place.
We are all familiar with the analogy of “silos of information” – disparate applications between which information must be passed and which are universally criticised due to the inherent duplication of entry and associated errors and delays that can cripple an organisation. Individual “silos” can include social media, trend services, CAD, CAM, labour systems, PLM, ERP, sourcing solutions, tracking systems, CRM, EPOS, e-commerce, warehouse management, business intelligence and a myriad of third party systems.
Each of these may employ differing computing technologies, platforms, user interfaces, reporting and workflow mechanisms. All of which leads us to the inevitable conclusion that integration is both a necessity and a dizzying prospect to consider. We have personally worked on a number of projects where the sheer quantity of systems in use makes mapping them a mind-boggling exercise – the connections between them becoming so numerous and intertwined that it starts to look more like a tangled ball of twine than anything resembling an organised systems environment.
For example, product records exist across the entire extended enterprise, as do their attributes:
- Costs and selling prices
- Bills of materials
- Manufacturing instructions
And yet, this data should only be “integrated” when it reaches a pre-agreed status (for example BOM & cost approval), when it may then require input or default of additional information not generated in the originating “master” application. And the same applies to suppliers, customers, agreements, purchase orders, sales orders, shipments and so on – all need to be in sync across multiple databases, and be available to manipulate from within any. This task is made doubly daunting because the appropriate master application for any particular data can differ by circumstance. Product data, for example, can be created first in ERP, or often in PDM. Customers may originate first in CRM, sometimes in ERP or PLM, or indeed elsewhere. And the integration between them may be triggered at different times for different organisations, as well as operating in multiple different directions.
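By way of illustration only, a status gate of this kind can be sketched in a few lines of code. The status names, the default fields and the `push_to_erp` callback below are hypothetical, not drawn from any particular PLM or ERP product:

```python
# Sketch: only propagate a product record once it reaches an agreed status,
# defaulting fields the receiving system needs that the "master" never captures.
APPROVED_STATUSES = {"BOM_APPROVED", "COST_APPROVED"}  # hypothetical gate

def ready_to_integrate(record: dict) -> bool:
    """The record leaves the master system only at a pre-agreed status."""
    return record.get("status") in APPROVED_STATUSES

def enrich_for_erp(record: dict) -> dict:
    """Default additional information not generated in the originating system."""
    enriched = dict(record)
    enriched.setdefault("tax_code", "STANDARD")   # hypothetical defaults
    enriched.setdefault("warehouse", "MAIN")
    return enriched

def sync(record: dict, push_to_erp) -> bool:
    if not ready_to_integrate(record):
        return False          # too early: the data stays in the master only
    push_to_erp(enrich_for_erp(record))
    return True
```

In a real implementation the gate, the defaults and the direction of travel would all be configuration points, precisely because (as noted above) the “owner” of any given data differs by organisation.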
Business requirements have changed fundamentally over the past few years, with an increasing rate of change in systems and processes; hence it is important that companies aren’t constrained by technology in their ability to react to market demands, change to maintain competitiveness – or even to survive. A company’s own changing business demands, the “shelf life” of business applications and the rise and fall of software suppliers results in integration requirements changing, too – shifting continuously as applications are added, reconfigured, upgraded or replaced. Roll-out of any change, but especially integrated software, is not a trivial job and requires significant time, experience, resource and careful management to undertake effectively.
With business processes subject to constant improvement – at an increasing rate – and components within the software stack changing continuously, that tangled ball of twine we described earlier might more accurately be represented as the proverbial “can of worms” – continuously writhing, defying any attempt to glean an accurate picture of their nature or number. The task and risk associated with managing configurations, applications and suppliers, and the implementation and ongoing refinement, improvement and management of them, should not be underestimated.
“Best in class” companies tend to have more complete integration, while “laggards” continue to use cut and paste or rekeying, with its associated delays, duplication, errors and inefficiencies.
The objective of software integration, as we mentioned in our introduction, is to get every software package in the extended supply chain to communicate up and down the line so effectively that it appears as a single, homogenised system. Unfortunately there isn’t an Esperanto-style lingua franca available for this process – this would require a huge collaborative effort between all suppliers and developers involved, as well as an agreement on who “owns” the data.
Given the complexities involved with managing the same sort of collaboration within a single organisation, we believe it’s safe to say this collaboration will never happen. Despite the proliferation of Electronic Data Interchange (EDI) standards, for instance, most integrations are still bespoke, requiring their own configuration and customisation. Furthermore, each business user is different in terms of their own processes and their unique combination of applications and stages of implementation. For example, whilst it may seem “obvious” that PLM is the owner-originator of all product related data, a client who has an existing ERP application but no PLM system may see things differently. For them, ERP has been the lead system for so long that it becomes nominated as the “owner” of all data without much consideration. There are no hard and fast rules in these cases, but in most instances the decisions and timescales can at least be improved, and business risks and costs reduced, by enlisting a proven methodology and undertaking scoping, justification, selection and planning prior to project kick-off.
So, while that final destination of a unified and holistic business system isn’t something we would urge you to forget, the purpose of this article is to remind anyone undertaking an enterprise-level project like PLM that integration cannot be taken lightly or considered a minor element of the initiative.
Any business that does find themselves weighing up their options where integration is concerned, though, will have a number of proven options and delivery methods to consider:
- The most efficient (and therefore rarest) form of integration: real-time integration occurs when a change committed within the “master” application automatically propagates across the enterprise – no rekeying, no duplication, no delay, and no errors.
- In the case of multiple systems, a preferred approach can include the use of another stage, or what’s referred to as “middleware”. This involves transfer to an interim area, allowing for the addition of default values and missing information, and providing an opportunity to manage the introduction point, formal validation and acceptance. This additional stage can result in some data discrepancies and delay, but greatly simplifies the management of different roll-out and upgrade of systems.
- Many software suppliers provide limited import and export facilities for their systems, but that often isn’t enough. Suppliers or partners provide “certified” export and import routines, and APIs which use predefined input and output formats such as CSV or XML. Use of these rather than bespoke links is recommended as they are:
- Tested and proven in other client sites
- Supported by the supplier’s ongoing maintenance offering.
However, it is each company’s responsibility to check which APIs are available to suit their needs.
- Some suppliers provide tools to generate client-specific APIs to cater for gaps in their clients’ requirements. Use of these requires more technical competence and testing.
- Integration tools and Applications can be used to create bespoke integration and interfaces. Use of these requires considerable knowledge of the applications to minimise risk, and the services of an experienced third party are often (and rightly) sought when this method is considered.
- Many clients still use what we call “Microsoft Integration”. This is data sent as email attachments or via intranet (or worse by fax) across the extended enterprise for future use, or alternatively rekeyed. This is by far the easiest strategy to implement from the point of view of resourcing and expertise, but is subject to considerable delay, inefficiency, duplication and error. Even so, it is often the only way to send information to supply chain partners who cannot integrate their systems. As in EDI, it is often the biggest organisation in the relationship that sets the standards for data transfers and their subordinate partners often rely on print and re-entry.
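To make the “middleware” option above more concrete, here is a minimal sketch of the staging pattern it describes: records land in an interim area, defaults are applied, and only records that pass formal validation are released downstream. The field names, required fields and default values are illustrative assumptions, not any vendor’s schema:

```python
# Sketch of the "middleware" staging pattern: an interim area where defaults
# are applied and validation performed before any downstream system accepts
# the data. Invalid records simply remain staged for manual review.
staging_area: list = []

REQUIRED_FIELDS = {"sku", "description", "cost"}  # hypothetical rule

def stage(record: dict) -> None:
    """Step 1: transfer from the source system into the interim area."""
    staged = {"currency": "USD", **record}   # hypothetical default value
    staging_area.append(staged)

def validate(record: dict) -> bool:
    """Step 2: formal validation before acceptance downstream."""
    return REQUIRED_FIELDS.issubset(record)

def release(accept) -> int:
    """Step 3: push only valid records; leave the rest for manual review."""
    released = 0
    for record in list(staging_area):
        if validate(record):
            accept(record)
            staging_area.remove(record)
            released += 1
    return released
```

The extra hop introduces the delay noted above, but it also gives a single, inspectable place to manage defaults, validation and acceptance when the systems on either side are upgraded or replaced on different schedules.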
Despite the name – which is as innocuous sounding as they come – integration really can be anything but simple or intuitive. If we take the example of customer orders and updates, these could potentially arise from a number of different sources across the extended business systems environment – ERP, CRM, EPOS, mobile applications – and could initially consist of nothing more than header details and product order lines. Satisfying this order might be considered a difficult task with just one application to consider, but the complexity becomes compounded when we need to keep several integrated applications in sync.
The same principles apply to the front end of creative product development, too: all supporting solutions must be kept in sync to avoid errors and timing issues.
This requirement has been recognised and addressed – at least to some degree – by some of the major solution providers. They now offer complete solutions with built-in integration – a kind of one-stop offering. However, these “complete” offerings often disappoint when examined closely: integrated offerings are often only labelled as such, and in fact are the result of the acquisition of suppliers of add-on products or plug-in modules, or partnerships with third parties. Often these points of integration are incomplete or ineffective, and since the original developer of the acquired application may no longer be in business, there is typically little hope that the interface will be improved.
Here, as with all solution selection, the onus is firmly on the due diligence of the prospective customer. While a pre-defined and pre-packaged unified system might sound ideal, it is only through detailed and experienced questioning and challenging that the real situation can be exposed and evaluated. And more often than not, you will discover that there is still an integration initiative to run, despite the vendor’s initial promises.
In the long term, Cloud technologies may prove to be the integrator’s answer to the perennial and thorny question of what integration approach to take. But there will not be a sustainable or complete option until data update and exchange standards are developed and employed. As technology develops we may achieve “one version of the truth” and the Cloud crowd may make extravagant claims – but similar claims have appeared many times before in the form of PDM, PLM, MRP, ERP, Open Systems, Object Orientation and a host of other platforms that initially promised more than they could deliver.
The Cloud certainly can offer significant benefits in terms of rapid system development, and almost certainly will seriously challenge traditional monolithic applications. But the ability to automatically integrate applications now (or in the near future) is something any prospective customer should challenge.
Just to add to your already voluminous list of concerns, there is also the challenge of system phasing or timing – where these activities fall in the lifecycle of an implementation project. Very few clients start from scratch and envision a “big bang” where all of their connected applications are implemented simultaneously, and integrated at the same time.
It is important that each company considers their entire extended organisation and the requirements of both their internal and external stakeholders – clearly establishing the “as-is” and the “to-be” situations to set priority and phasing.
Remember that can of worms we imagined so vividly earlier? New business partners, systems and upgrades will be regular occurrences, adding their own weight to the already convoluted systems environment we have worked so hard to untangle. User acceptance testing should be undertaken whenever software modules are upgraded or replaced, since simple changes can cause unpredictable results in other areas – particularly where those areas have been seamlessly integrated.
Prior to go-live of any component, systems and their integration must be tested thoroughly and this should be completed and signed off by the client in accord with detailed scripted scenarios and a formal test plan. Developers cannot be relied on to undertake integrated or volume testing, and it would be a considerable business risk not to thoroughly test new installations with “real world” transactions and volumes. Even using standard integration tools and APIs, business applications are so complex and so configurable that every project really is different. It is impossible for the system or integration authors to test every permutation of setting, system switch, parameter and run-time option. Similarly, business applications should always be tested prior to roll-out of upgrades; with the added complexity of integration, this is even more critical. Companies should utilise a reporting and data analysis tool which can access information from all of the integrated systems rather than rely on different tools for each application.
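A scripted scenario of the kind described above can be as simple as a list of transactions replayed against the integrated systems, with the end states compared at every step. The sketch below uses two in-memory dictionaries standing in for ERP and warehouse management, and an invented `propagate` step as the integration under test; a real test plan would of course use production-like data and volumes:

```python
# Sketch: replay a scripted scenario and assert the integrated systems
# never drift apart. The systems and the propagation step are stand-ins.
def propagate(source: dict, target: dict) -> None:
    """Hypothetical integration step: copy committed changes downstream."""
    target.update(source)

def run_scenario(scenario):
    """Apply each scripted order line to ERP, propagate, and verify sync."""
    erp, wms = {}, {}
    for sku, qty in scenario:
        erp[sku] = erp.get(sku, 0) + qty
        propagate(erp, wms)          # the integration under test
        assert erp == wms, "systems drifted out of sync"
    return erp, wms
```

The value of the scripted form is that the same scenario can be re-run, unchanged, after every upgrade or reconfiguration, which is exactly when (as argued above) regression is most likely.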
And now it gets complicated! Running a number of integrated systems, keeping them in sync and managing issues is significantly more challenging than working with a single system in isolation. Our goal of a truly unified “single version of the truth” must be built upon a formal, documented, robust strategy that can cater for the inevitable failure or disaster – be that hardware, communications, business application, integration component, operating system failure, or something more routine caused by deliberate or accidental user error. Processes must be in place to:
- Alert all failures
- Establish which processes have updated which application (fully or partially)
- Determine whether to continue processing transactions or update other systems
- Catch up with the components or transactions which failed
- Resolve any resulting system incompatibilities or lack of integrity
- Identify serious issues to be addressed by domain experts
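One minimal way to support the processes listed above is to journal every outbound update per target system, so that after a failure it is possible to establish which systems were (fully or partially) updated, identify which were not, and replay the failures. The journal shape, transaction identifiers and system names below are illustrative assumptions only:

```python
# Sketch: per-system journalling of each transaction, supporting the
# "establish what updated, alert on failures, catch up" processes above.
journal: dict = {}   # transaction id -> {system name: "ok" | "failed"}

def record(tx_id: str, system: str, ok: bool) -> None:
    """Log the outcome of pushing one transaction to one target system."""
    journal.setdefault(tx_id, {})[system] = "ok" if ok else "failed"

def failed_targets(tx_id: str) -> list:
    """Establish which systems a transaction did NOT successfully reach."""
    return [system for system, status in journal.get(tx_id, {}).items()
            if status == "failed"]

def replay(tx_id: str, retry) -> None:
    """Catch up the components which failed; anything still failed is
    left in the journal to be escalated to domain experts."""
    for system in failed_targets(tx_id):
        if retry(system):
            journal[tx_id][system] = "ok"
```

Whatever the mechanism, the essential point stands: without some record of who received what, the questions in the list above simply cannot be answered after the fact.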
Strong contractual commitment and service levels need to be established between the various suppliers, and lines of responsibility and demarcation drawn, so that when things go wrong (and they often do) the team responsible – although not necessarily to blame – is identified and tasked to rectify the situation. Even if the cause of the failure can be quickly resolved – e.g. failure of power or a communications link in the middle of an update – then responsibility needs to be unambiguous. In an integrated environment, the answers will need to be available in the short window of time it takes for errors to be noticed: you must find out who owns the problem, and take the required steps to ensure the data is recovered and system integrity and status is restored.
Many companies these days are outsourcing their systems management, which makes the above especially critical when we consider that the party responsible may operate in another time zone or in another language.
Similarly, documentation and rehearsals of disaster recovery procedures should be an integral part of a company’s business strategy, yet very few companies actually rehearse at all, and rely instead on sheer effort and luck to carry the day.
As we hope this article has explained, integration is a complex and dynamic requirement – but one that is essential to achieving the digital transformation vision that so many modern organisations share.
The majority of companies require external assistance to separate the technology from the business process: third parties who bring sector-specific experience, tools and techniques to enable the design and delivery of a solution relevant to their specific needs – as they are now and as they may be in the future.
Integration can be a can of worms, certainly – but some of us have handled more than one such can in our time.