The Insurance Information Bureau of India (IIB) was promoted in 2009 by IRDA, with the participation of stakeholders of the insurance sector, with the objective of supporting the insurance industry with sector-level data to enable data-driven, scientific decision making, including pricing and the framing of business strategies. In its brief existence, the Bureau has generated insightful reports, both periodic and one-time, for the benefit of the industry.
IIB operates the Central Index Server, which acts as a nodal point between the different Insurance Repositories and helps de-duplicate demat accounts at the stage of creating a new account. The Central Index Server also acts as an exchange for transmitting and routing information about transactions on each policy between an insurer and the insurance repository.
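To make the de-duplication idea concrete, here is a minimal sketch of how a central index might reject a duplicate account at creation time. The field names (`name`, `dob`, `pan`), the normalisation rules, and the class itself are illustrative assumptions, not IIB's actual schema or implementation.

```python
import hashlib


class CentralIndex:
    """Toy central index: one fingerprint per policyholder identity."""

    def __init__(self):
        self._index = {}  # fingerprint -> repository holding the account

    @staticmethod
    def _fingerprint(name: str, dob: str, pan: str) -> str:
        # Normalise fields so trivial formatting differences (case,
        # surrounding whitespace) do not create duplicate accounts.
        key = "|".join(part.strip().lower() for part in (name, dob, pan))
        return hashlib.sha256(key.encode()).hexdigest()

    def register(self, repo: str, name: str, dob: str, pan: str) -> bool:
        """Return True if a new account may be created, False if a
        matching account already exists in some repository."""
        fp = self._fingerprint(name, dob, pan)
        if fp in self._index:
            return False  # duplicate: an account already exists
        self._index[fp] = repo
        return True
```

A second repository submitting the same person, even with different capitalisation or spacing, would be turned away by `register`.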
The ease of use and simplification that computers bring have driven the rapid computerisation of business processes. As a result, all processes are quicker, and all data is being carefully archived for future use. But what is the significance of storing large amounts of data if nothing useful can be generated from it?
At IIB, enormous volumes of insurance users' data were accumulating over time. Their requirement was not only to use this large amount of data for analysis, but to derive mortality and morbidity rates from it. Collecting, accumulating, cleansing, processing and analysing the data was challenging in itself. But doing this for all 40 crore records? That was the BIG challenge!
After a considerable number of requirement-gathering sessions, and after understanding how diversely spread the data was, we drew up a solid road map for IIB's solution. We started by systematically organising all the data from the various sources. The next item on our plan was to settle on a way to cleanse and map the huge amount of raw data. Following that, we worked out how to logically churn the user data to derive the mortality and morbidity rates.
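The cleanse-and-map step described above can be sketched as a small normalisation function. This is a minimal illustration, assuming raw records arrive as dictionaries from heterogeneous sources; the field names, the list of accepted date formats, and the validation rules are all assumptions for the sake of the example, not IIB's actual feed layout.

```python
from datetime import datetime
from typing import Optional

# Fields a record must carry to be usable downstream (assumed names).
REQUIRED = ("policy_id", "dob", "status")

# Date layouts commonly seen in heterogeneous raw feeds (assumed list).
DATE_FORMATS = ("%Y-%m-%d", "%d/%m/%Y", "%d-%b-%Y")


def cleanse(record: dict) -> Optional[dict]:
    """Return a normalised copy of the record, or None if it is unusable."""
    rec = {k: str(v).strip() for k, v in record.items()}
    if any(not rec.get(field) for field in REQUIRED):
        return None  # drop records missing mandatory fields
    # Map the various raw date formats onto a single ISO 8601 form.
    for fmt in DATE_FORMATS:
        try:
            rec["dob"] = datetime.strptime(rec["dob"], fmt).strftime("%Y-%m-%d")
            break
        except ValueError:
            continue
    else:
        return None  # unparseable date of birth
    rec["status"] = rec["status"].lower()
    return rec
```

Records that survive `cleanse` share one schema, which is what makes the later aggregation step straightforward.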
Our BIG solution was ready to go. It could robustly ingest large amounts of data from the sources and map it into processable information on the fly. Once this information was fed into our algorithm, it could efficiently produce the mortality and morbidity rates for huge volumes of input data in a short span of time. We are now proud solution providers to IIB, helping them process large volumes of raw data and convert it into useful, readable and insightful information.
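The rate computation at the heart of such a pipeline can be illustrated with a short aggregation. This is a sketch only: the crude-rate definition (deaths per exposed life-year, grouped by age band) is standard actuarial practice, but the record layout (`age_band`, `exposure_years`, `died`) is an assumption for illustration, not the production algorithm.

```python
from collections import defaultdict


def mortality_by_age_band(records):
    """Crude mortality rate per age band: deaths / exposed life-years."""
    deaths = defaultdict(int)
    exposure = defaultdict(float)
    for rec in records:
        band = rec["age_band"]
        exposure[band] += rec["exposure_years"]
        if rec["died"]:
            deaths[band] += 1
    # Skip bands with zero exposure to avoid division by zero.
    return {
        band: deaths[band] / exposure[band]
        for band in exposure
        if exposure[band] > 0
    }
```

A morbidity rate would follow the same shape, counting illness claims instead of deaths; at scale the same aggregation distributes naturally, since per-band sums can be computed in parallel and merged.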