Cost based Model for Big Data Processing with Hadoop Architecture

Mayank Bhushan
Sumit Kumar Yadav

GJCST Volume 14 Issue C2

Research ID: CSTSDE108G8

With the fast pace of growth in technology, we get more options for building better and more optimized systems. Handling huge amounts of data requires scalable resources, and systems spend a measurable amount of time moving data for computation. Here comes the technology of Hadoop, which works on a distributed file system: huge amounts of data are stored in a distributed manner for computation. Data is saved in blocks across many racks with fault tolerance, keeping at least three copies of each block. The MapReduce framework handles all computation and produces the result. JobTracker and TaskTracker work with MapReduce to process current as well as historical data, and the cost of that processing is calculated in this paper.
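As a rough illustration of the cost idea described above, the sketch below estimates the time to store and process a dataset under HDFS-style three-way replication followed by a single MapReduce pass. The linear cost model and the rate values are hypothetical parameters chosen for demonstration, not figures from the paper.

```python
# Illustrative sketch only: the linear model and the rate constants are
# assumptions for demonstration, not values taken from the paper.

def processing_cost(data_gb, replication=3,
                    write_rate_gbps=0.1, compute_rate_gbps=0.05):
    """Estimate the time (seconds) to store and process `data_gb` of data.

    Storage cost scales with the replication factor (HDFS keeps at
    least three copies of each block); compute cost is modeled as a
    single MapReduce pass over the data.
    """
    store_s = (data_gb * replication) / write_rate_gbps
    compute_s = data_gb / compute_rate_gbps
    return store_s + compute_s

# 10 GB: 300 s to write three replicas + 200 s to process = 500 s
print(processing_cost(10))
```

Under this toy model, replication dominates the cost for write-heavy workloads, which is one reason moving computation to the data (rather than data to the computation) is central to the Hadoop design.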


Funding

No external funding was declared for this work.

Conflict of Interest

The authors declare no conflict of interest.

Ethical Approval

No ethics committee approval was required for this article type.

Data Availability

Not applicable for this article.

Mayank Bhushan and Sumit Kumar Yadav. 2014. “Cost based Model for Big Data Processing with Hadoop Architecture”. Global Journal of Computer Science and Technology - C: Software & Data Engineering, Volume 14 Issue C2, pp. 13-17.


GJCST Volume 14 Issue C2, Pg. 13-17
Journal Specifications

Crossref Journal DOI 10.17406/gjcst

Print ISSN 0975-4350

e-ISSN 0975-4172

Version of record

v1.2

Issue date

May 15, 2014

Language

English


