The previous post, Big Data beyond the hype, introduced the 3 V's – Volume, Velocity and Variety. In this post, let us look at why volume matters.
One size does not fit all
If you are a data architect or a data modeler, the first order of business when designing a system or modeling a business process is to understand the data flow. Such an exercise naturally turns toward understanding data volume. A second aspect is that, in cases where such models have already been built, architects and modelers periodically check the current applicability of their models, since data tends to grow and change constantly.
Is this important, and if so, why? Yes, and here is why.
A data model is not set in stone
The assumption that once a business process has been modeled, it will stand the test of time as the business develops is wrong. Even when the model itself has not changed – that is, business expansion and process changes have not influenced it – data volume alone can dictate revisiting the model, and in some cases it will dictate changes as well. A fully functional data model might bring business processes and systems to a grinding halt if it does not account for changes in data volume.
Current state of data
Companies big and small rely increasingly on collecting data about more aspects of their business, their customers, and their buying patterns. The e-commerce and mobile revolutions have made it possible for businesses to get a microscopic view of a transaction along with the profile of the buying or interested customer. Mixed with the influence of social networks, it is a data deluge.
More than ever, businesses are collecting information of all kinds – tags and identifiers, signals and readings from machine parts, location coordinates from mobile devices, transactional data, customer demographics, selling medium, buying patterns, geographical trends, effective promotions, cross-sell influences, and so on. That brings in a lot of data, and a lot of stress on poorly designed systems and data models.
Advancements in Data Modeling
A data model reflects the business and its processes, and is constructed to efficiently manage the data that is collected. Efficiency is measured by whether the business can get actionable intelligence out of it. Data architects are exploring, innovating, and introducing new ways to model information systems. As the business grows, the data model evolves to address and manage the change while still serving the organization's business analysts and data researchers. No longer is a modeler confined to a single model or structure – ER or otherwise. Advancements in DBMSs (Database Management Systems) also make it possible to harness the power of the underlying hardware, whether client-server clusters built from commodity parts or appliances built for a specific purpose.
Solutions after modeling
When data volume grows into terabytes and petabytes, modeling demands newer approaches and solutions. While legacy models solved and still solve real problems, those problem statements were different: expectations from such systems were different and mostly limited in size, so expansion using such solutions is limited too. Newer expectations pose a different problem to solve, and hence demand newer solutions.
And this is the case for Big Data solutions to deal with volume, one of the three V's. We will look at velocity and variety in detail, and revisit volume to explore how Big Data solutions deal with them all.
Let us continue to look at Big Data beyond the hype by understanding two things – the types of data that we deal with, and the need for a change in the approach to dealing with them.
Types of Data
We spend more of our time than ever on our mobile devices – mobile phones and tablets – and on traditional forms such as laptops and desktops. Then there are other electronic devices, such as Nest and Internet televisions, which are data-driven and getting smarter by the day to give us an "integrated experience". All of them are interconnected, consuming tons of information on the Internet – tons of different types of information. We use our phones and their apps to listen to songs, to catch up on movie trailers, to turn on our cars' air conditioning before we get in, and even to control heaters at home based on consumption and usage patterns.
Data from these sources can broadly be classified as having the following characteristics – Volume, Velocity and Variety, or the 3 V's.
One could think about these and ask, "Have we not been dealing with them already? What is new now?"
It is akin to asking, "I have been using mobile phones for a long time; what is special and different about the iPhone?"
When you have to deal with data that has the above characteristics, the approaches you have used so far – designed for a small or restricted scale – are no longer applicable, or at least need to be revisited. As volume grows and a variety of data flows in, approaches have to change as well.
Just as one would seek an iPhone for its rich user experience and big app ecosystem, powered by unparalleled technological innovation, a CIO or data manager would seek solutions driven by innovations in managing such data and in deriving value from it. And that is the case for Big Data solutions. We will explore the 3 V's further in the next part.
It has been an interesting few weeks of discussing and brainstorming on Big Data. Broadly, these conversations were around the following:
Let us be honest and agree that there is hype; Forrester and Gartner agree with that. Big Data is a buzzword today. But it is also a fact that Big Data has matured around us so rapidly that it is hard to ignore. That does not mean it is without merit.
Before we decide to keep it, throw it away, or get carried away by the buzzwords and hype, let us look at a few common situations through the eyes of different personas.
If you are a CIO
If you head a Services Company
If you head the Data Engineering Department
I shall address these questions in my next post.
“Unlike MapReduce or Dryad frameworks, it offers full general-purpose, dynamic task graph execution, which enables developers to implement algorithms using arbitrary task structures”
The CIEL project and Skywriting look very promising and simple to work with. Dynamic task-graph execution is an important feature, and it looks to me that implementing algorithms with arbitrary task structures would be very flexible with CIEL.
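To make the idea of a dynamic task graph concrete, here is a minimal sketch in Python. It is purely illustrative and is not the CIEL or Skywriting API: a toy scheduler processes a queue of tasks, where each task may either return a result or spawn new child tasks at runtime, so the shape of the graph is not fixed up front the way a MapReduce job's is.

```python
# Illustrative sketch only (not the CIEL API): a task is a zero-argument
# callable that returns either a result or a list of new tasks to run.
from collections import deque

def run(initial_tasks):
    """Drain a work queue; tasks spawned at runtime are appended to it."""
    queue = deque(initial_tasks)
    results = []
    while queue:
        out = queue.popleft()()
        if isinstance(out, list):      # the task dynamically spawned children
            queue.extend(out)
        else:                          # the task produced a leaf result
            results.append(out)
    return results

# Example: sum a range by recursively splitting it into subtasks,
# so the task graph unfolds during execution.
def sum_range(lo, hi):
    def task():
        if hi - lo <= 4:               # small enough: compute directly
            return sum(range(lo, hi))
        mid = (lo + hi) // 2           # otherwise spawn two child tasks
        return [sum_range(lo, mid), sum_range(mid, hi)]
    return task

print(sum(run([sum_range(0, 100)])))   # 4950
```

The point of the sketch is only the control flow: the scheduler discovers work as tasks run, which is the "arbitrary task structures" property the quote above describes.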
I cannot wait to play with this. More on this to come.