Managing the change risk in technology improvements

Technology, on its own, is not a panacea for all problems. The creative vision of the individuals who design the architecture and write the computer programs determines the scope and usefulness of any technical solution. Often the remit of a new project is to address immediate business needs and satisfy regulatory requirements; in general, it is a reaction to external stimuli. Of course, it is possible to design technical solutions that not only fulfil the current requirements but are also flexible enough to accommodate the evolving future needs of the business. However, such designs require insightful domain expertise, an in-depth understanding of the technology and market foresight, which is a rather tall order. Therefore, one often faces the challenge of constantly undertaking new technology projects and/or upgrading existing infrastructure in the face of ever-growing regulatory requirements (e.g. Solvency II, Basel III, MiFID II, Dodd-Frank), technological change (.NET, mobile apps, cloud computing) and competitive pressures. Rather than provide a prescriptive solution, I will discuss the issues around managing change in the context of three projects that I have worked on. Hopefully, they will resonate with similar incidents that the reader may have encountered.

During 1998-2002, as a Ph.D. student, I worked on several industry-funded projects that involved using mathematical modelling and computational optimisation in supply chain planning and management. In one such project, for a leading consumer goods manufacturer, we created a Decision Support System (DSS) to support the strategic and tactical decision making of its global supply chain. The strategic decisions included site locations, choices of production, packing and distribution lines, and the capacity increment or decrement policies. We developed scenarios (along with their respective probabilities) to capture the uncertainty in consumer demand for the various products. The resulting stochastic model had about 0.5 million constraints and 1 million decision variables. As it was not possible to optimise the problem on a single desktop computer, we linked together a group of personal computers using a message passing library (PVM [1]) to create a virtual parallel computing environment. Using the concepts of functional and data parallelisation, we were able to solve the problem within 4 hours [2]. Delighted with our achievement, we arranged a meeting to demonstrate this to the industry sponsor and began generating the release version of the individual components. However, two days before the meeting, the results produced by the DSS were inconsistent with our expectations. Obviously, there was chaos within the research group. Late at night, we realised that a new version of one of the embedded library routines had re-arranged the raw data prior to the mathematical optimisation. The individual responsible had failed to COMMUNICATE the new version to the group. Thus the library worked perfectly with the settings of his local machine, but when incorporated within the larger system it caused problems. Fortunately, we were able to revert to the earlier version and the issue was resolved.
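
To make the idea of data parallelisation concrete, the sketch below splits independent demand scenarios across worker processes and combines their probability-weighted results. It is purely illustrative: the original system was built on a cluster of PCs using PVM, whereas here Python's multiprocessing module stands in, and evaluate_scenario() is a hypothetical placeholder for the optimisation subproblem solved per scenario.

```python
# Illustrative sketch only: not the original PVM/C implementation.
from multiprocessing import Pool

def evaluate_scenario(scenario):
    """Solve the subproblem for one demand scenario (placeholder for the real optimisation)."""
    scenario_id, probability, demand = scenario
    cost = sum(demand)          # stand-in for the true subproblem objective
    return probability * cost   # probability-weighted contribution

if __name__ == "__main__":
    # Each scenario: (id, probability, demand profile over three periods)
    scenarios = [
        (0, 0.5, [100, 120, 90]),
        (1, 0.3, [140, 110, 95]),
        (2, 0.2, [80, 130, 105]),
    ]
    # Data parallelism: scenarios are farmed out to worker processes and
    # the master combines the probability-weighted results.
    with Pool(processes=3) as pool:
        expected_cost = sum(pool.map(evaluate_scenario, scenarios))
    print(f"Expected cost across scenarios: {expected_cost:.1f}")
```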

Communication amongst the stakeholders of a project is fundamental to successful change management. Given the nature of large projects, with stakeholders from differing backgrounds, it is important to keep communication free of jargon and domain-specific phrases. It is often forgotten that there are two parties in any communication: the originator and the recipient(s). The fewer the layers between these two counterparties, the lower the probability of misunderstanding.

As a university lecturer, I was entitled to a new computer at regular intervals. As my research involved modelling and optimisation of large-scale stochastic systems, I would often reach the limit of memory resources and/or processor speed. The migration from one computer to another was a painful process. Copying the data was straightforward, but the individual programs, and the systems that use them, often had settings which had been made in the heat of the moment to get things working, with the rationale conveniently forgotten. In the absence of any log of these changes, getting the essential programs and systems to work in a new environment was a nightmare, resulting in lost productivity over several days.

Most software engineers are taught the importance of DOCUMENTATION. However, many consider it an unnecessary distraction from the creative process. This attitude results in documentation that is either an afterthought or patchy. Similarly, configuration changes to programs or systems are often not fully captured. This lack of clearly articulated records of findings and changes causes a tremendous loss of time when changes have to be reverse-engineered. An additional layer of technology is not necessary for this task; the ubiquitous pen and paper should suffice to maintain the log, which is what I did.

In 2007, I left academia and joined one of the leading hedge funds in London. This hedge fund, with its roots in fixed income, had morphed into a multi-strategy fund and grown organically and exponentially over the years. The systematic trading business accounted for a significant proportion of the assets under management. This systematic trading desk was driven by extremely numerate individuals (often referred to as quants) who would analyse data, identify opportunities, develop models to back-test their hypotheses and thereafter release the models to the implementation team. The implementation team would develop and maintain the trading version of the software model. Each quant often worked independently on a given model, resulting in a plethora of data versions. Moreover, the procedure for accessing the data was not uniform, resulting in unnecessary duplication of effort. It was difficult to define a uniform process, as each quant had their own preferred way of doing things. This constrained the ability to rapidly deploy the existing knowledge base and tools to a new market. For instance, if an existing model was trading the CAC 40 (the French index), deploying the same analytics to the Australian market required repeating the entire cycle rather than simply changing the source of the data.

This was an issue of DISCIPLINE within the team in adhering to a uniform process. Moreover, as the growth had been organic, the primary focus was on generating business, without much attention paid to imposing common guidelines. However, once an organisation expands beyond a threshold size, a process needs to be defined. The solution adopted by the hedge fund was to have a dedicated data management team acting as the central repository of data and owning the procedures for its storage and access.
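
The sketch below illustrates the kind of uniform data-access layer such a team might provide; the market names, file paths and function names are hypothetical, not the fund's actual infrastructure. The point is that every model requests data through one central interface keyed by market, so redeploying an analytic from the CAC 40 to an Australian index becomes a configuration change rather than a rewrite.

```python
# Hypothetical sketch of a uniform data-access layer; paths are illustrative.
import csv
from typing import Dict, List

# Central registry maintained by the data management team.
MARKET_SOURCES: Dict[str, str] = {
    "CAC40":  "data/cac40_constituents.csv",
    "ASX200": "data/asx200_constituents.csv",
}

def load_prices(market: str) -> List[dict]:
    """Single, uniform entry point for market data, keyed only by market name."""
    path = MARKET_SOURCES[market]
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def run_model(market: str) -> None:
    rows = load_prices(market)
    # ... the same analytics run unchanged; only the data source differs ...
    print(f"{market}: loaded {len(rows)} rows")

if __name__ == "__main__":
    run_model("CAC40")   # existing deployment
    run_model("ASX200")  # new market: only the registry entry changes
```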

In summary, the challenges posed by evolving business needs and changing technology require constant attention. A pro-active response requires:

a. a well-defined process, free of technical jargon, for unambiguous communication between the various stakeholders,

b. documentation of critical processes to minimise the loss of productivity when migrating to new technology, and

c. the discipline to adhere to a uniform process for seamless scaling and organisation-wide adoption of new technology.

1. PVM: Parallel Virtual Machine: A Users' Guide and Tutorial for Networked Parallel Computing, http://www.csm.ornl.gov/pvm/

2. Computational solution of capacity planning models under uncertainty, Parallel Computing, Volume 26, Issue 5, March 2000, pages 511-538.