
The 3 Vs of big data – concepts and models


The term “data” is not new to us; it is one of the first things taught in any information technology and computing course. Data, you may remember, is considered the raw form of information. Yet for roughly a decade now, the term Big Data has been the hype. As the name suggests, big data is a huge amount of data, and it can be processed in different ways, using different methods and tools, to extract the information you need. This article explores big data concepts using the three components (the 3 Vs) described by Doug Laney, a data-warehousing pioneer who is also credited with pioneering the field of Infonomics (information economics).

Before you continue, you can read our articles on Big Data Basics and the Use of Big Data to get the gist. They complement this post in explaining big data concepts further.

The 3 Vs of big data

Huge amounts of data, accumulated in different ways, used to be stored in various databases and dumped after a while. Once the realization emerged that the more data there is, the easier it is to surface different, relevant information with the right tools, companies began storing data for longer periods. That means adding new storage devices or using the cloud to keep data in whatever form it arrives: documents, spreadsheets, databases, HTML, and so on. The data is then organized into proper formats using tools capable of handling huge chunks of it.

NOTE: Big data is not limited to the data you collect and store on your premises and in the cloud. It may also include data from other sources, including but not limited to the public domain.

The 3V big data model is based on the following Vs:

  1. Volume: refers to the amount of data and its storage management.
  2. Velocity: refers to the speed at which data is processed.
  3. Variety: refers to grouping data from different, seemingly unrelated sources.

The following sections explain the big data model by detailing each dimension (each V).

A] Volume in big data

When talking about big data, you can think of volume as a huge collection of raw information. While that is true, volume also concerns storage costs. Critical data can be stored on-premises or in the cloud, the latter being more flexible. But do you really need to store all of it?

According to a white paper released by the Meta Group, as the amount of data increases, parts of it start to seem unnecessary. It further states that businesses should retain only the data they intend to use. Other data can be discarded or, if a business is reluctant to discard “seemingly unimportant data,” dumped onto unused computing devices, or even tape, so that the business does not have to pay to store it.
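The tiering idea above can be sketched in a few lines: move files that have not been touched within a retention window from primary (hot) storage to a cheap archive (cold) location. This is a minimal illustration, not a production archiver; the directory names and the one-year retention period are made-up assumptions.

```python
import shutil
import time
from pathlib import Path

# Hypothetical layout -- adjust paths and retention to your own setup.
HOT_STORAGE = Path("data/hot")
COLD_STORAGE = Path("data/cold")
MAX_AGE_DAYS = 365  # keep roughly a year of data in primary storage

def archive_stale_files(hot: Path, cold: Path, max_age_days: int) -> list[Path]:
    """Move files not modified within max_age_days from hot to cold storage."""
    cold.mkdir(parents=True, exist_ok=True)
    cutoff = time.time() - max_age_days * 86400
    moved = []
    for path in list(hot.rglob("*")):
        if path.is_file() and path.stat().st_mtime < cutoff:
            target = cold / path.relative_to(hot)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(path), str(target))  # preserves relative layout
            moved.append(target)
    return moved
```

The same logic scales to cloud object storage, where most providers offer cheaper “archive” tiers for exactly this kind of rarely-accessed data.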

I say “seemingly unimportant data” because I believe any type of data may be needed by any business in the future, sooner or later, and therefore needs to be stored for a long while before you know for certain that it really is unimportant. Personally, I offload old data to hard drives from yesteryear, and sometimes to DVDs. My primary computers and cloud storage hold the data I consider important and know I will use. That includes one-time-use data, which may end up on an old hard drive a few years later. This example is only for understanding; it does not fit the description of big data, because the amounts involved are far smaller than what enterprises deal with as big data.

B] Velocity in big data

Processing speed is an important factor in the concept of big data. Consider the many websites out there, especially e-commerce sites. Google has long acknowledged that page-load speed matters for better rankings. Beyond rankings, speed also keeps users comfortable while they shop. The same applies to data being processed into other information.

When it comes to velocity, it is important to know that it goes beyond just higher bandwidth. It combines ready-to-use data with a variety of analysis tools. Ready-to-use data means doing some homework up front to create data structures that are easy to process. The next dimension, Variety, sheds additional light on this.
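The “homework” mentioned above can be as simple as indexing raw records once so that later queries become cheap lookups instead of full scans. A minimal sketch, with made-up records and field names purely for illustration:

```python
# Hypothetical raw records, as they might arrive from a source system.
raw_orders = [
    {"order_id": "A1", "customer": "alice", "total": 25.0},
    {"order_id": "B2", "customer": "bob", "total": 40.0},
    {"order_id": "A3", "customer": "alice", "total": 15.0},
]

# Homework done once: group orders by customer so that later queries
# are a dictionary lookup rather than a scan over every record.
orders_by_customer: dict[str, list[dict]] = {}
for order in raw_orders:
    orders_by_customer.setdefault(order["customer"], []).append(order)

def total_spent(customer: str) -> float:
    """Answer a per-customer question without rescanning the raw data."""
    return sum(o["total"] for o in orders_by_customer.get(customer, []))
```

At big-data scale the same idea appears as pre-built indexes, partitions, and materialized views: you pay the structuring cost once so that every subsequent query runs fast.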

C] Variety in big data

When there is a huge amount of data, it becomes important to organize it so that analysis tools can process it easily. There are tools for organizing data, too. When it arrives, data may be unstructured and of any form; it is up to you to decide how one piece of data relates to another. Once you figure out the relationship, you can pick the appropriate tools and transform the data into the desired form for structured, sorted storage.
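As a small illustration of handling variety, the sketch below maps the same kind of record, arriving from two unrelated sources in different formats (a CSV export and a JSON export), onto one common schema. The sources, field names, and target schema are all invented for this example:

```python
import csv
import io
import json

# Hypothetical inputs: the same kind of record in two unrelated formats.
csv_export = "name,signup\nalice,2021-03-01\nbob,2021-04-15\n"
json_export = '[{"user": "carol", "joined": "2021-05-20"}]'

def normalize(csv_text: str, json_text: str) -> list[dict]:
    """Map both sources onto one schema: {"user": ..., "signup_date": ...}."""
    unified = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        unified.append({"user": row["name"], "signup_date": row["signup"]})
    for item in json.loads(json_text):
        unified.append({"user": item["user"], "signup_date": item["joined"]})
    return unified
```

Once seemingly unrelated datasets share a schema like this, they can be joined, sorted, and fed to the same analysis tools.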


In other words, the 3V big data model rests on three dimensions: the useful data you have, correct data labeling, and faster processing. If you take care of these three, your data can be easily processed or analyzed to figure out whatever you want.

The above explains both the concepts and the 3V big data model. The articles referenced in the second paragraph will provide additional support if you are new to the topic.

If you would like to add anything, please comment below.

