When new data sources are both your business…and your biggest burden

Companies handle sensitive data from clients (often financial institutions) for various reasons, such as analyzing the data and providing insights for clients' consumption, or generating comprehensive summaries of transactions. Very often, the challenges of processing this data lead to fractured, multi-step, manual processes.

Manual processing has several potential disadvantages:

  • It is cumbersome and error-prone
  • It often leads to inappropriate access to sensitive information
  • It introduces inherent delays when files arrive outside working hours

Inefficient manual processes lead to delayed on-boarding of data files and insufficient provision of the services that depend upon up-to-date data. Moreover, these manual processes can be very expensive to provision and maintain. More efficient businesses move towards automated pick-up and processing of customer files in different file formats, e.g. delimited text, fixed-width text, various Excel formats, etc. This initial automation approach resolves the issue of manual processing, but suffers when it comes to expandability and often fails to deliver the efficiency expected when the business case was made at the outset.

Such a solution means that new development is required for every new customer or file type. Though much of the work effort is repeatable, each solution instance must be treated as a separate implementation and go through every stage of the software development life cycle before reaching production. This typically creates a long-term dependency on a delivery resource, whether from a consulting firm or a skilled in-house capability. These realities have led to the persistence of manual processing, even in large-scale enterprises, where there is a reluctance to move away from tried-and-tested, but inefficient, processing.

Considering the advantages and pitfalls above, some enterprises have progressed to solutions that resolve the key challenges of automation by adopting generic, self-service solutions. These enable companies to process new clients and file types with minimal or no new development, letting business users add new clients with limited or no involvement from expensive technical resources, which increases speed and reduces operating costs. The key to such solutions is treating growth possibilities as part of the foundation design, providing mechanisms to on-board new clients, data, and file types through streamlined, user-friendly processes. When implemented successfully, there is a very significant net saving in time, as represented in the two graphs below, based on our experience with a number of clients across multiple industries:


Although processing a file through a generic self-service solution takes slightly longer than through a custom automated solution, generic self-service enables companies to start processing files far sooner than either of the other two approaches. Additionally, since the code lives in one place, maintaining, understanding, and tracing it becomes easy. The net benefits are quickly apparent, nowhere more so than when a new contract is signed and provision of services is demonstrably achieved far faster than would have been possible under previous approaches.
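To make the self-service idea above concrete, the sketch below shows one way a metadata-driven design can work. This is a minimal, hypothetical Python illustration, not Eccella's implementation: the client names, config fields, and the parse_file helper are all invented for the example. The point is that on-boarding a new client or file layout means adding a configuration entry, not writing new parser code.

```python
import csv
import io

# Hypothetical on-boarding metadata: in a real system this would live in a
# database or UI that business users maintain, not in source code.
CLIENT_CONFIGS = {
    "acme_bank": {
        "format": "delimited",
        "delimiter": "|",
        "columns": ["txn_id", "amount", "date"],
    },
    "globex": {
        "format": "fixed_width",
        "widths": [8, 10, 10],
        "columns": ["txn_id", "amount", "date"],
    },
}

def parse_file(client, text):
    """Parse a raw file's text into row dicts, driven only by the client's config."""
    cfg = CLIENT_CONFIGS[client]
    rows = []
    if cfg["format"] == "delimited":
        # Generic delimited-text handling: the delimiter comes from config.
        reader = csv.reader(io.StringIO(text), delimiter=cfg["delimiter"])
        for record in reader:
            rows.append(dict(zip(cfg["columns"], record)))
    elif cfg["format"] == "fixed_width":
        # Generic fixed-width handling: column widths come from config.
        for line in text.splitlines():
            fields, pos = [], 0
            for width in cfg["widths"]:
                fields.append(line[pos:pos + width].strip())
                pos += width
            rows.append(dict(zip(cfg["columns"], fields)))
    return rows
```

For example, `parse_file("acme_bank", "T1|100.50|2024-01-02")` yields one row dict keyed by the configured column names. A production version would of course add validation, scheduling, Excel support, and error handling, but the design choice stays the same: parsing logic is written once, and growth comes from configuration.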

We at Eccella have successfully implemented generic, self-service, expandable solutions for a number of clients. We possess the skills and methodology to design and deliver such solutions to enterprises whose growth is powered by their clients' data and often depends on the turn-around time to start processing new client files in production and the agility to start delivering value. If you're interested in our help relieving the burden of new data sources, please contact me via LinkedIn and check out our website at

Sarvesh Kashyap, Senior Consultant at Eccella Corporation