
The history of Big Data can be briefly explained as follows:
In 2005 the technology behind big data was already in use, but it had no name. Roger Mougalas finally gave it one: "big data", by which he meant collections of data so large that they were almost impossible to manage and process with the traditional business intelligence tools of the time. In the same year Hadoop, a platform built to handle Big Data, was launched; Hadoop was created from an open-source software framework called Nutch and combined it with Google's MapReduce.
But to understand the origins of big data in more depth, here is a complete timeline of its history:
The earliest record of using data to track and control business actually dates back about 7,000 years, to when accounting was introduced in Mesopotamia to record the growth of crops and herds. Accounting knowledge has continued to improve ever since.
The history of big data began in 1663
In 1663 John Graunt recorded and examined all available information about the causes of death in London. He wanted to understand, and build a warning system for, the bubonic plague that was raging at the time.
In what is considered the first recorded analysis of statistical data, he compiled his findings in the book Natural and Political Observations Made upon the Bills of Mortality, which provides great insight into the causes of death in the seventeenth century. Because of this work, John Graunt can be considered the father of statistics.
Era of 1887
In 1887 Herman Hollerith invented a computing machine that could read holes punched into paper cards in order to organize census data.
The History of Big Data in 1937
The first sizable data project was created in 1937 by the Franklin D. Roosevelt administration in the United States. The project came about after the Social Security Act was enacted, which required the government to track the contributions of 26 million Americans and more than 3 million employers. IBM was contracted to build a punch-card-reading machine for this massive bookkeeping project.
The Era of 1943
The first data-processing machine appeared in 1943; it was developed by the British to decipher Nazi codes during World War II. The device, named Colossus, searched for patterns in messages intercepted by the British. It could read 5,000 characters per second, reducing a task that previously took weeks to just hours.
The Era of 1952
The United States' National Security Agency (NSA) was formed in 1952, and over the following decade it contracted some 12,000 cryptologists as it confronted huge amounts of data during the Cold War.
The Era of 1965
In 1965 the United States government decided to build the first data center to store more than 742 million tax returns and 175 million sets of fingerprints, transferring all of these records onto magnetic computer tape stored in a single location. The project was discontinued not long afterwards, but it is generally accepted as the beginning of the era of electronic data storage.
The history of big data developed rapidly in 1989
In 1989 the British computer scientist Tim Berners-Lee created the World Wide Web, aiming to facilitate the sharing of information through a 'hypertext' system.
Era of 1995
In 1995 the amount of data in the world grew quickly as more and more devices, whether PCs or other connected devices, were linked to the internet. That same year a supercomputer was built that could do more work in one second than a calculator operated by a single person could do in 30,000 years.
The Era of 2005
In 2005 Roger Mougalas of O'Reilly Media coined the term Big Data, referring to large data sets that are nearly impossible to manage and process using traditional business intelligence tools.
2005 was also the year Hadoop was created by Yahoo!, built on top of Google's MapReduce. Its original purpose was to index the entire World Wide Web, and today open-source Hadoop is used by many organizations to process large amounts of data.
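To give a feel for the MapReduce idea that Hadoop popularized, here is a minimal, hypothetical word-count sketch in Python. It imitates the map, shuffle, and reduce phases in a single process; it is only an illustration of the programming model, not Hadoop's actual Java API, and the function names are invented for this example.

```python
from collections import defaultdict

# Map phase: emit a (word, 1) pair for every word in every document.
def map_phase(documents):
    for doc in documents:
        for word in doc.lower().split():
            yield word, 1

# Shuffle phase: group all emitted values by their key (the word).
def shuffle_phase(pairs):
    grouped = defaultdict(list)
    for word, count in pairs:
        grouped[word].append(count)
    return grouped

# Reduce phase: sum the counts collected for each word.
def reduce_phase(grouped):
    return {word: sum(counts) for word, counts in grouped.items()}

if __name__ == "__main__":
    docs = ["big data is big", "hadoop processes big data"]
    print(reduce_phase(shuffle_phase(map_phase(docs))))
    # {'big': 3, 'data': 2, 'is': 1, 'hadoop': 1, 'processes': 1}
```

In a real Hadoop cluster the same three phases run in parallel across many machines, which is what makes it practical to process data sets far too large for a single computer.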
The emergence of the world's largest biometric database in 2009
In 2009 the Indian government decided to take iris scans, fingerprints, and photographs of all of its 1.2 billion inhabitants. All of this data is stored in the world's largest biometric database.
Developments in 2010
In 2010 Eric Schmidt spoke at the Techonomy conference in Lake Tahoe, California, where he stated that "there were 5 exabytes of information created by the entire world between the dawn of civilization and 2003."
Developments in 2011
In 2011 the McKinsey report on Big Data, "The next frontier for innovation, competition, and productivity", stated that by 2018 the United States would face a shortage of 140,000 to 190,000 data scientists as well as 1.5 million data managers.
That same year Facebook launched the Open Compute Project to share specifications for energy-efficient data centers.
Big data platform development in 2013
Docker was launched as open-source OS-level container software.
Data center development for Big Data in 2015
Google and Microsoft led massive data center development.
The development of cloud storage in China in 2017
Huawei and Tencent joined Alibaba to build data centers in China.
Developments in 2018
Market leaders in the data center world adopted 400G networks, which can transfer data at up to 400 gigabits per second (roughly 50 gigabytes per second).
Edge computing was born in 2020
Edge computing began to emerge, changing the role of the "cloud" in major sectors of the economy.
Super-fast transfer rates in 2021
Data centers began adopting 1000G networks, which can transfer data at up to 1,000 gigabits per second (roughly 125 gigabytes per second).
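The 400G and 1000G figures are link speeds in gigabits per second, so the throughput in gigabytes per second is smaller by a factor of eight. A quick sketch of that conversion (a simple illustration, not tied to any particular vendor's hardware):

```python
# Convert a link speed in gigabits per second (Gbps) to gigabytes per second (GB/s).
def gbps_to_gb_per_second(gbps: float) -> float:
    return gbps / 8  # 8 bits per byte

for speed in (400, 1000):
    print(f"{speed}G network = {gbps_to_gb_per_second(speed):.0f} GB/s")
# 400G network = 50 GB/s
# 1000G network = 125 GB/s
```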
Edge computing in 2025
Data centers are expected to multiply and be located closer to devices to accommodate edge computing needs.