KICIT & KSET Pvt Ltd, Nagpur India


Shared Memory, Caching and Virtual Memory - why do I care?

Submitted by asang on 7 December, 2006 - 07:50
Recently I read a post on Sue Loh's WinCE Base Team blog that made me realize the rationale for using shared memory has changed with advances in computer architecture. Traditionally, our OS courses teach that if you have to share data between two co-operating processes, there are two options:
  1. Copy the buffer from the source process to the target process.
  2. Create a shared memory area that both processes have access to, and use it to share the information.
What most people believe is that the second approach is inherently better: after all, it avoids creating two copies of the same data. However, a closer look at modern processors shows that this is not always true.

Most modern CPUs, such as ARM, MIPS, Hitachi SH-4 and even the Pentium 4, have 8-16 kBytes of instruction and data caches between the CPU and main memory (which is typically SRAM or DRAM). The caches hold the most frequently used instructions and data of executing tasks, and on these processors virtual addresses are used to identify "what" is being cached. All of these processors are used with operating systems like Windows, Linux and embedded operating systems like Windows CE.

When two processes share data through shared memory, each uses its own set of virtual addresses for that data. Since the Level 1 cache uses the virtual address as a tag, the virtual addresses corresponding to the shared memory region cannot safely be cached, unless of course the region has the same virtual address in both processes. If the region were cached, the cache controller would have no way of knowing that two different virtual addresses from two different processes refer to the same area of memory. As a result, these operating systems mark the shared memory region as non-cacheable in the page table of each process. Since caching significantly reduces the access time for frequently used data, access to shared memory turns out to be significantly slower. The advantage of sharing may easily be offset by the disadvantage of non-cacheability.

To summarise, it is clear that one cannot make assumptions about performance based on theory alone. If you build performance-critical systems, you need to understand your processor and your operating system, and finally do some benchmarking to see which approach works best in your case. One size fits all is certainly not going to be the case!

Copyright © 2014-2017 KSET & KICIT Pvt Ltd.