Tiering Up Without Tearing Up

-written by Pat Kelsey, Dataram Storage Blog Team Member

For about as long as there has been storage, there have been different storage technologies on the scene at any given time, distinguished from one another primarily by price, performance, and reliability. Traditionally, integrating these multiple tiers of storage into the SAN was something seen mainly in the large enterprise, where the operational scale and performance requirements justified the attendant cost and complexity.

In the SMB/SME environment, the combination of growing active datasets and the attractive price/performance point of flash memory motivates adding a third tier to the typical two-tier storage architecture (of rotating media fronted by volatile and non-volatile cache RAM on the array controller). With new memory technologies currently a very active area of research and development, it’s easy to imagine a fourth tier reaching commercialization in the next 5-10 years.

Integrating multiple tiers into a storage appliance can offer much higher performance and greater management simplicity than dealing with separate technological islands in the SAN (or forgoing the additional tiers completely). The key is to strike the right balance between automated data management and application-specific migration policies: some users may be served well enough by generic tiering algorithms, but others (those using filesystems with separable metadata and those dealing with database transaction logs come to mind) will want to pin certain data to specific tiers in the hierarchy to obtain maximum performance.
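To make that distinction concrete, here is a minimal sketch of the kind of placement logic described above: a generic heat-based rule with application-specific pins layered on top. The tier names, data classes, and threshold are illustrative assumptions, not any particular product’s API.

```python
# Minimal sketch of tier placement: application-specific pins first,
# then a generic "heat" threshold. All names/values are illustrative.
PINNED = {
    "fs_metadata": "flash",        # separable filesystem metadata
    "db_txn_log": "nvram_cache",   # database transaction logs
}

def place(block_class: str, accesses_per_hour: int) -> str:
    if block_class in PINNED:          # explicit pin always wins
        return PINNED[block_class]
    if accesses_per_hour > 1000:       # otherwise a generic heat rule
        return "flash"
    return "rotating"

print(place("db_txn_log", 5))      # nvram_cache (pinned)
print(place("user_data", 5000))    # flash (hot)
print(place("user_data", 2))       # rotating (cold)
```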


What Are Memory Ranks, Why Do We Have Them and Why Are They Important?

-written by Jeff Goldenbaum, Dataram Memory Blog Team Member

Comparing Quad-Rank and Dual-Rank Memory Modules

The term “rank” simply refers to a 64-bit chunk of data.  In its simplest form, a DIMM with DRAM chips on just one side would contain a single 64-bit chunk of data and would be called a single-rank (1R) module.  DIMMs with chips on both sides often contain at least two 64-bit chunks of data and are referred to as dual-rank (2R) modules.  Some DIMMs have DRAM chips on both sides but are configured so that they contain two 64-bit data chunks on each side (four in total) and are referred to as quad-rank (4R) modules.  Quad-rank DIMMs run at a maximum PC3-8500 (DDR3-1066) speed in current architectures.
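As a rough illustration of how a rank maps onto physical chips, the sketch below counts how many DRAMs of a given data width are needed to fill one 64-bit rank (72 bits with ECC). The function and the assumption of a standard 72-bit ECC bus are mine, added purely for illustration.

```python
# Hedged sketch: DRAM chips needed to form one rank, assuming a
# standard 64-bit data bus plus 8 ECC bits on an ECC DIMM.
def chips_per_rank(chip_width_bits: int, ecc: bool = True) -> int:
    bus_width = 64 + (8 if ecc else 0)   # 72 bits total with ECC
    return bus_width // chip_width_bits

print(chips_per_rank(4))              # 18 x4 chips per rank (ECC)
print(chips_per_rank(8))              # 9 x8 chips per rank (ECC)
print(chips_per_rank(16, ecc=False))  # 4 x16 chips per rank (non-ECC)
```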

Why Have Quad-Rank DIMMs?

With each new generation of DRAM chip, quad-rank DIMMs are the least expensive way to achieve the highest density DIMM.  Today, a 16GB dual-rank DDR3 DIMM is built with thirty-six 4Gb (gigabit) chips.  A quad-rank 32GB DIMM will utilize thirty-six “dual-die” (essentially two DRAMs within each physical chip) 4Gb chips.  Until next-generation 8Gb chips become more mainstream and production costs come down, we won’t see a cost-effective dual-rank 32GB DIMM.  But when that does occur, you’ll not only see dual-rank 32GB DIMMs, you’ll also see quad-rank 64GB DIMMs!
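The arithmetic behind those chip counts can be sketched as follows, assuming x4 dies on a 72-bit ECC module (18 dies per rank, 16 of which hold user data). The function and constants are illustrative assumptions used only to reproduce the capacities quoted above.

```python
# Hedged sketch of the DIMM capacity math, assuming x4 DRAM dies on a
# 72-bit (64 data + 8 ECC) module, i.e. 18 dies per rank, 16 for data.
def dimm_capacity_gb(ranks: int, die_density_gbit: int, chip_width: int = 4) -> float:
    data_dies_per_rank = 64 // chip_width          # dies that hold user data
    return ranks * data_dies_per_rank * die_density_gbit / 8   # Gb -> GB

print(dimm_capacity_gb(2, 4))   # 16.0 GB dual-rank from 36 single-die 4Gb chips
print(dimm_capacity_gb(4, 4))   # 32.0 GB quad-rank from 36 dual-die 4Gb packages
```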

Quad-rank DIMMs actually perform best in Xeon 7500-based systems, where the memory BOB (buffer-on-board) architecture lets them deliver better performance than single- or dual-rank DIMMs of equal capacity.  The maximum speed is still PC3-8500 and remains the same whether the system has the minimum configuration or is fully loaded.

When to Consider Ranks

Today’s 2-way servers with Intel Xeon 5600 series processors that utilize DDR3 memory technology have limitations on how many ranks of memory may be installed.  DIMM slots are grouped into “channels”: each processor has 3 channels of either 2 DIMM slots (server models with 12 DIMM slots) or 3 DIMM slots (server models with 16 or 18 DIMM slots).  Each channel supports no more than 8 ranks of memory.  Simply put, no more than 12 quad-rank DIMMs (two per channel) can be installed in these servers.  To utilize all available slots in servers with 16 or 18 DIMM slots, the use of dual-rank DIMMs is required.
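The rank limit translates directly into the DIMM counts cited above. The small sketch below reproduces them; the 8-ranks-per-channel limit and the 6 channels (3 per processor in a 2-way server) come from the paragraph, while the function itself is just an illustration.

```python
# Hedged sketch: maximum DIMMs per channel given an 8-rank-per-channel
# limit, then scaled to a 2-way Xeon 5600 server (3 channels per CPU).
MAX_RANKS_PER_CHANNEL = 8

def max_dimms_per_channel(ranks_per_dimm: int, slots_per_channel: int) -> int:
    by_rank_limit = MAX_RANKS_PER_CHANNEL // ranks_per_dimm
    return min(by_rank_limit, slots_per_channel)

channels = 2 * 3  # 2 CPUs x 3 channels each
print(channels * max_dimms_per_channel(4, slots_per_channel=3))  # 12 quad-rank DIMMs max
print(channels * max_dimms_per_channel(2, slots_per_channel=3))  # 18 dual-rank DIMMs fill every slot
```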


ESD: Urban Myth – or Shocking Reality?

-written by Paul Henke, Dataram Memory Blog Team Member

CPUs and memory modules are expensive key components of today’s powerful computers. These components can sustain damage, degradation or destruction due to ESD – electrostatic discharge.  To improve product reliability, prudent electronics manufacturers have employed best practices to manage and eliminate ESD in the supply chain and while manufacturing and handling components.  However, the picture is very different at many end-user sites.

With more IT personnel coming from backgrounds in PC integration, laptop maintenance and overclocking “gaming” environments, there is less awareness of, and sometimes outright rejection of, the traditional methods used to protect delicate electronic components from ESD.

To make matters worse, YouTube contains demo tutorials of technicians plugging electronic components into PCs, workstations and servers without using an ESD wrist strap, the most common first line of defense against ESD damage.  These videos contribute to the myth that ESD damage does not occur and is therefore nothing to worry about.

Remember that semiconductors are made of silicon and silicon dioxide, and they do suffer permanent damage when subjected to excessive voltage.  Discharges as low as 10V can damage certain components, and because the user cannot feel a discharge that small, the damage goes unnoticed.  In fact, a person typically does not feel a discharge until somewhere north of 3,000 volts!  This is very different from the wintertime experience of walking across a carpeted floor, touching a metal door knob and experiencing that memorable “ZAP” of spark and shock.  That jolt can be over 20,000 volts!

As an ISO 9001 company, Dataram has incorporated ESD prevention methods since they were first introduced in electronics manufacturing over 30 years ago.  Quality and reliability are of utmost importance at Dataram, and our memory module manufacturing facilities reflect that commitment.  Since Dataram guarantees its DIMMs to work for the life of the system, it is in our best interest to make sure we do not have to replace more than expected, keeping our costs low!

For the same self-serving reason, IT managers should create an ESD-free work area and ensure that all service personnel follow best practices for handling components.  Successfully eliminating ESD damage in your datacenter will keep your servers, and especially critical CPUs and DIMMs, running trouble-free for many years to come!

Server manufacturers HP and IBM have written support documents outlining best practices for the proper handling of server components.  Our Customer Support engineers provide these documents as educational tools when proper handling of DIMMs is the topic of discussion.  If you would like to learn more, send us an email at support@dataram.com.  And remember, damage from ESD is NOT a myth!


The “Three-Legged Stool” Now Has a Fourth Leg

-written by Tom Mayberry, Dataram Storage Blog Team Member

If you talk long enough to any development or project manager, they will eventually tell you about the “three-legged stool” or some other similar metaphor. The three legs of the stool refer to the three basic parameters that are available to them for controlling their projects. These are Schedule, Functionality and Cost.

The trick is to find the optimal “length” of each of these legs without allowing the stool to tip over. For example, the schedule for a project can usually be brought in by increasing the cost. That is, adding equipment and people to a project at the right time and place will shorten time-to-market. Similarly, reducing the functionality of a product will also allow it to be brought to market sooner.

The first challenge for a project manager is to determine how far to manipulate any one parameter. Remove too much functionality and you don’t have a product worth shipping; add too much and you’ll risk missing your market window. The second challenge is to determine when adjusting one parameter stops having the desired effect on the others. After a while, adding resources will no longer enable a project to finish sooner; in fact, it may actually cause it to finish later. Any miscalculation could upset the balance of our stool, resulting in a less-than-optimal project plan.

With many of today’s products, project managers must now pay special attention to a fourth project parameter: Quality. For these products, it does no good to bring a product to market on schedule, within required costs and with the required functionality, if the quality is poor. This is especially true with most Storage products. Loss or corruption of data or loss of availability is not acceptable. Quality is the fourth leg of our stool.

So, my suggestion is to not let product quality be an afterthought. Do not make it a property of secondary importance. Manage it as you would the other three primary project parameters. After all, if the quality leg is too short, all your stool will do is wobble.


Why ISO 9001?

-written by Jim Hampshire, Dataram Memory Blog Team Member

At Dataram, all new employees, from executive management to touch labor, are given a training session entitled “ISO9001 QMS (Quality Management System) OVERVIEW AND AWARENESS”.  I have been providing that training session for over 10 years and am often asked, “Why have an ISO 9001 system?”  Hopefully they have an answer by the end of the session, but here I will attempt to answer the question in a few paragraphs.

The ISO 9001 standard has now been in existence for over 20 years and is still increasing in popularity.  There are now over 1 million companies globally with certifications, and that number has increased every year since the standard’s inception.  In recent years there has been rapid growth in China, which now accounts for approximately a quarter of those certifications.  In fact, ISO 9001 is the world’s most-used Management System Standard.

There must be some benefit to be gained, or why would organizations continue to expend dollars and labor hours to verify ongoing compliance?  The answer is simple – the principles of the standard deal mainly with three process-oriented goals: Customer Satisfaction, Continual Improvement and Utilization of Proven “Good Business Practices”.  Those three goals would head the list of any reputable company’s objectives.

As an individual who has been in the electronics manufacturing Quality/Test arena for over 30 years, if there is any concept I fully support, it is that reliable products and ultimate customer satisfaction must originate from a process approach.  Quality is not simply a group of individuals or a department but rather an organizational philosophy, a way of doing business.  Adherence to the principles and intent of the 9001 standard has proven to be one effective way of putting that philosophy into practice because it deals with a company’s overall Management System.

ISO 9001 does not define the acceptance levels of your product or service, but helps you achieve consistent results and continually improve the process.  If you can produce a good product most of the time, this helps you make it all of the time. It’s just a series of proven basic business practices.

Of course, the benefits of employing an ISO 9001 system within an organization can only be fully recognized if it is treated seriously, truly enforced and believed in by all personnel, starting with executive management.  At Dataram, we deem our ISO 9001 compliance to be a top priority with all affected personnel’s involvement mandatory.  We use it as it is intended—as a tool to continuously improve, with enhanced customer satisfaction as the ultimate objective.


Memory and Storage Technologies Begin to Blur

-written by Jason Caulkins, Dataram Storage Blog Team Member

For the last 50+ years, computing has been out of balance. What I mean by that is that there are three fundamental pillars of computing: CPU, Memory, and Storage. You don’t have a very interesting computer without these fundamental bits. Yes, I know you need an IO system as well, but let’s call that a given. While the CPU and memory have enjoyed the benefits of Moore’s Law, the Storage Pillar has languished. It’s a slave to physics, due to the mechanical wonders of the rotating disk and moving head.

Thus, you have two skinny (fast) pillars and one fat (slow) one. This yields an imbalanced system. The software and systems folks have worked very hard to hide this imbalance from the user. They have, by and large, succeeded. However, in the era of MySpace, Facebook, Twitter, and Zynga, the fat pillar can no longer be ignored, like the elephant in the server room that it is. Nifty caching schemes have been developed on the server side to compensate for the relatively poor storage performance. These schemes are great, but they are costly and just move the burden of storage performance into the server, which is an unfortunate price to pay for what should be a stand-alone aspect of the compute process (for now).

Long ago, someone figured out that you could use solid state core memory to store data long-term. They also figured out that in order for such a device to be useful, they had to wrap it in electronics to make it look like a disk interface, so as to be compatible with all the existing systems out there.

Now, fast forward a few decades and we have DRAM and flash technology far denser and cheaper than ever before. All that has to be done is to figure out what kind of electronics to wrap it in so that it is compatible with the many existing systems out there. Easy, right? Make it look like a disk, just like the old days. This works really well for a number of reasons. First, disks are modular and designed to be replaced if required, which is handy for replacing failed units, expansion, and upgrading. This fits the technology available today (flash). Soon, however, a new memory technology will become available and replace flash as well as DRAM. Once this occurs, there will no longer need to be a disk-type interface at all. Long-term and short-term storage will be handled by the same technology.

This means that the storage technology will no longer need to be wrapped in drive (or other legacy bus) technology in order to be useful. It will all be directly accessible by the CPU (or bridge chip, depending on how technology flip-flops in the next 10 years). The other cool thing is that you will have a common storage/memory space. This means that the three pillars will effectively become two. There will probably remain a need for mechanical disks for archive, but online and even near-line storage will just occupy some of your main memory.

Which brings us back around to the concept of storage living in the PC or server, the very thing I said was unfortunate earlier. What will make all this manageable is software. Storage managed properly can be located anywhere in the connected world.

More on that later.
