Using Latency to Evaluate Computer System Performance

α Olawuyi J.O., σ Olawuyi J.O., ρ Fagbohunmi S.G., Ѡ Olawuyi O.M., ¥ Mgbole F.

α Abia State Polytechnic


Article Fingerprint

ResearchID: 28L6C


Abstract

Building high-performance computer systems requires an understanding of how systems behave and what makes them fast or slow. In addition to our file system performance analysis, we have a number of projects in measuring, evaluating, and understanding system performance. The conventional methodology for system performance measurement, which relies primarily on throughput-sensitive benchmarks and throughput metrics, has major limitations when analyzing the behaviour and performance of interactive workloads. The increasingly interactive character of personal computing demands new ways of measuring and analyzing system performance. In this paper, we present a combination of measurement techniques and benchmark methodologies that address these problems. We use simple methods to make direct and precise measurements of event-handling latency in the context of a realistic interactive application, and we show how the results of such measurements can be used to understand the detailed behaviour of latency-critical events. We demonstrate our techniques in an analysis of the performance of two operating system releases, Windows 9x and Windows XP Professional. Our experience indicates that latency can be measured for a class of interactive workloads, providing a substantial improvement in the accuracy and detail of performance information over measurements based strictly on throughput.
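The core measurement idea described in the abstract can be illustrated with a minimal sketch. This is an illustration of the principle only, not the paper's actual instrumentation: the function and event names (`measure_event_latency`, `handle_keystroke`) are hypothetical, and real interactive-latency measurement would timestamp events at the input device or OS queue rather than in application code. The sketch timestamps each event as it is enqueued and records the elapsed time when its handler completes, so each measurement captures queueing delay plus handling time, which is the latency a user would perceive.

```python
import time
from collections import deque

def measure_event_latency(events, handler):
    # Timestamp every event at enqueue time, then record the elapsed
    # time once its handler finishes: queueing delay + handling time.
    queue = deque((time.perf_counter(), e) for e in events)
    latencies = []
    while queue:
        enqueued_at, event = queue.popleft()
        handler(event)
        latencies.append(time.perf_counter() - enqueued_at)
    return latencies

# Hypothetical workload: each "keystroke" takes about a millisecond to handle.
def handle_keystroke(event):
    time.sleep(0.001)

latencies = measure_event_latency(["k1", "k2", "k3"], handle_keystroke)
```

Note how later events accumulate latency while waiting behind earlier handlers; a throughput metric (events per second) would hide exactly this per-event detail that the paper argues matters for interactive workloads.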

References

14 references cited in this article:
  1. Beenish Zia, Xiang Guo, Yong Tong Chua, Thomas Labno (1995). Medical Image Reconstruction Optimizations Using Unified Programming Model.
  2. Brian Bershad, Richard Draves, A. Forin (1992). Using Microbenchmarks to Evaluate System Performance.
  3. Ben Smith (1996). Ultrafast Ultrasparcs.
  4. J. Bradley Chen, Yasuhiro Endo, Kee Chan, David Mazieres, Antonio Dias, Margo Seltzer, Michael Smith (1996). The Measured Performance of Personal Computer Operating Systems.
  5. David Jefferson, L. Johnson, Donald Reifer (1995). Ada Compiler Validation Summary Report: Certificate Number 940902S1.11376. UNISYS Corporation IntegrAda for Windows NT, Version 1.0, Intel Deskside Server with Intel Pentium 60 MHz.
  6. C. Lindblad, D. Tennenhouse (1996). The VuSystem: A Programming System for Compute-Intensive Multimedia.
  7. Larry McVoy (1996). lmbench: Portable Tools for Performance Analysis.
  8. Jeffrey Mogul (1992). SPECmarks Are Leading Us Astray.
  9. James O'Toole, Scott Nettles, David Gifford (1993). Concurrent Compacting Garbage Collection.
  10. John Ousterhout (1991). Why Operating Systems Aren't Getting Faster As Fast As Hardware.
  11. Mark Shand (1992). Measuring Unix Kernel Performance with Reprogrammable Hardware.
  12. Ben Shneiderman (1992). Designing the User Interface.
  13. Jeff Reilly (1995). SPEC Discusses the History and Reasoning behind SPEC 95.
  14. M. VanName, B. Catchings (1994). Reaching New Heights in Benchmark Testing.

Funding

No external funding was declared for this work.

Conflict of Interest

The authors declare no conflict of interest.

Ethical Approval

No ethics committee approval was required for this article type.

Data Availability

Not applicable for this article.

How to Cite This Article

Olawuyi J.O. 2015. "Using Latency to Evaluate Computer System Performance". Global Journal of Computer Science and Technology - G: Interdisciplinary, Volume 14, Issue G5.

Download Citation

Journal Specifications

Crossref Journal DOI 10.17406/gjcst

Print ISSN 0975-4350

e-ISSN 0975-4172

Version of record

v1.2

Issue date

February 5, 2015

Language
en

Article Metrics
Total Views: 8696
Total Downloads: 2366


