Article Fingerprint
ResearchID
CSTSDE108G8
With the fast pace of growth in technology, we have ever more options for building better, optimized systems. Handling huge amounts of data requires scalable resources, and moving data to the computation takes a measurable amount of time. This is where Hadoop comes in: it works on a distributed file system in which huge amounts of data are stored in a distributed manner for computation. Data is saved in blocks across many racks with fault tolerance, keeping at least three copies of each block. The MapReduce framework handles all computation and produces the results; the JobTracker and TaskTracker coordinate MapReduce jobs over both current and historical data, whose processing cost is calculated in this paper.
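The abstract describes moving computation to the data rather than moving data to the computation. As a concrete illustration (not taken from the paper itself), the following is a minimal sketch of a classic Hadoop MapReduce word-count job in Java: map tasks run on the nodes holding the replicated data blocks, reduce tasks aggregate the shuffled output, and, in the Hadoop 1.x model the paper discusses, the JobTracker and TaskTrackers schedule and execute these tasks. The dfs.replication property corresponds to the "at least three copies of a block" behaviour; the class and path names are illustrative placeholders.

```java
// Minimal sketch of a Hadoop MapReduce job (classic word count).
// Illustrative only: class names and input/output paths are assumptions,
// and the paper's cost model is not reproduced here.
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: runs on the nodes holding the data blocks, emits (word, 1).
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce phase: sums the shuffled counts for each word.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // "At least three copies of a block": HDFS's standard replication knob.
    conf.set("dfs.replication", "3");
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // cuts shuffle traffic
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input dir
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output dir
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

The combiner reuses the reducer to pre-aggregate counts on the map side, which reduces the data shuffled between nodes; that shuffle traffic is one of the movement costs the paper's cost model accounts for.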
Mayank Bhushan. 2014. "Cost based Model for Big Data Processing with Hadoop Architecture". Global Journal of Computer Science and Technology - C: Software & Data Engineering (GJCST-C), Volume 14, Issue C2.
Crossref Journal DOI 10.17406/gjcst
Print ISSN 0975-4350
e-ISSN 0975-4172
Country: India
Subject: Global Journal of Computer Science and Technology - C: Software & Data Engineering
Authors: Mayank Bhushan, Sumit Kumar Yadav
Publish Date: May 2014