Storage, Networks and Budgets

-written by G. David Felt, Dataram Storage Blog Team Member

In today’s worldwide economy, companies are asking IT to do more with less while keeping things current. IT groups are more often than not told to upgrade the SAN, the network and the servers while holding costs down, or to live with what they have.

With the introduction of 10, 40 and 100 Gigabit connectivity, IT management is looking for ways to keep storage up to the task of growing data demand while also keeping networks moving quickly.

FCoE (Fibre Channel over Ethernet) has become a focal point for converging the two networks into one. If you are in the SMB (small and medium business) arena, you might find it hard to justify a SAN to management; and even if you have grown into entry-level enterprise, you may still find the cost a headache when other areas of IT are demanding dollars.

One solution to consider is refurbished Fibre Channel storage with the ability to accelerate performance. Quality refurbished FC arrays make an excellent storage solution for company growth when paired with Dataram’s XcelaSAN. This lets you gain solid, secure, high-performance storage for a fraction of what new enterprise-class storage can cost. On top of this, if FCoE is in the setup, the ability to access the storage from the network allows companies to move from slower storage such as NAS or direct-attached iSCSI to the growth potential that FC SAN-based storage has to offer.

Considering non-traditional options like this enables IT to meet business requirements: holding down cost while upgrading storage and networks to support business growth.


Posted in Storage Posts | Comments Off

Errors…Corrections Made in an Imperfect World

- written by Brian Cook, Dataram Memory Blog Team Member

Nothing in life is error-free, and computers are no exception. Computer systems are designed to be very reliable and to give years of trouble-free, dependable service. However, hiccups can and do happen, and in the world of computer memory, errors can occur. Fortunately, many errors are harmless and do not repeat; they are corrected by the memory controller via error detection and correction (ECC) algorithms and technologies. Furthermore, memory is one of the most reliable components in a computer system, much more reliable than its mechanical hard disk drive cousins.
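The single-bit correction mentioned above can be illustrated with a toy Hamming(7,4) code, a much simplified cousin of the SECDED and Chipkill codes that real memory controllers implement in hardware. This sketch is for illustration only; the function names and bit widths are not taken from any actual controller:

```python
# Toy Hamming(7,4): encodes 4 data bits with 3 parity bits so that any
# single flipped bit can be located and corrected. Real ECC DIMMs use
# much wider codewords, but the principle is the same.

def encode(d):  # d: list of 4 data bits
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p4 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p4, d[1], d[2], d[3]]  # positions 1..7

def correct(c):  # c: 7-bit codeword, possibly with one flipped bit
    # Recompute each parity over the positions it covers; the failing
    # checks sum to the index of the bad bit (the "syndrome").
    s = 0
    for p in (1, 2, 4):
        parity = 0
        for i in range(1, 8):
            if i & p:
                parity ^= c[i - 1]
        if parity:
            s += p
    if s:
        c[s - 1] ^= 1  # flip the bad bit back
    return [c[2], c[4], c[5], c[6]]  # extract the data bits

data = [1, 0, 1, 1]
word = encode(data)
word[4] ^= 1                  # simulate a single-bit memory error
assert correct(word) == data  # the error is found and corrected
```

A controller doing this on every read is why a stray flipped bit in an ECC system becomes a logged event rather than a crash.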

Working in the Technical Support department at a memory company, you encounter pretty much every type of system and memory problem one can imagine.  Sometimes there is an easy solution to an error message (re-seat or swap a DIMM) and sometimes you can troubleshoot a system for hours, only to find out that the problem isn’t memory related at all!

At Dataram, we go to great lengths in our ISO 9001 certified facility to design, manufacture and test reliable compatible memory that delivers great performance.  However, on the rare occasion that you do come across a memory related error, the following information may be useful to you.

Basically, you can break memory issues down into two types: installation failures, which occur right after you install or upgrade the memory, and operational failures, which occur farther down the road, after the system has been running solidly for some time. Let’s take a look at some examples of failures and how to address them:

Problem:  The system will not pass POST (no video or beeps).

Solution checklist:

  1. Ensure that you have the correct memory for the system you have installed it in.
  2. Check the user manual or contact Dataram Product Support to ensure that the memory is installed in the correct slots.  (Sometimes larger capacity quad-rank DIMMs need to be installed in a certain order.)
  3. If you are mixing capacities within a system, make sure you follow the user manual to ensure the DIMMs are installed in the correct slots.
  4. If you are sure the DIMMs are configured correctly, reseat (remove and re-install) the memory.
  5. Ensure you have the latest version of the system firmware/BIOS/open boot prom rev level.

Problem:  The system passes POST but the BIOS does not count or recognize all of the memory.

Solution checklist:

  1. Reseat the memory.
  2. Ensure you have the latest version of the system firmware/BIOS/open boot prom rev level.  Older firmware can cause systems to fail to recognize brand new 4GB or larger DIMMs because those capacities were not available when the original firmware was written.
  3. If unsuccessful, try solutions 2 and 3 in the next problem.

Problem:  The BIOS counts the memory correctly and passes POST, but a DIMM or bank of DIMMs is deallocated or disabled by the system processor or OS.

Solution checklist:

  1. Reseat the memory.  (I am starting to sound like a broken record).
  2. Once you boot to the OS, you should see which slot(s) are deconfigured or disabled.  Replace the DIMM(s) with a spare(s) and that should correct the problem.
  3. If your OS does not display deconfigured memory, you will need to replace one bank at a time until you find the suspect bank.
    Note:  Banks can also be referred to as quads, channels, or groups of DIMMs.

Problem:  The system has run for an extended period of time, but now has an issue or crashes.

Solution checklist:

  1. Check the error logs to see if a DIMM location has been identified with an error.  Error types include:
    • Correctable Error (ECC).  The system is still operational, but the event is logged.
    • Correctable Error count has exceeded the set threshold (types include intermittent, persistent, and sticky).  This DIMM should be replaced, as its behavior indicates it is approaching end-of-life.
    • Chipkill Error.  This is a multiple-bit error that has been corrected.  If repeated, this DIMM should also be replaced.
    • Uncorrectable Error.  None of the existing error-correcting technologies could correct the event.  This DIMM should be replaced.
  2. To keep your system error free and extend its life, ensure it is receiving sufficient airflow and cooling.  The system fans must be functioning properly, and the airflow vents must not be blocked.  If equipped, make sure the system airflow baffle is properly installed.  Monitor the various areas and racks of the datacenter to ensure there are no “hot spots”.  Make sure the inside of the system and the airflow vents are clean; dust particles and hair strands can lodge in a memory DIMM socket, causing failures.
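The log-checking step above can be automated. Here is a hedged sketch that tallies ECC events per DIMM slot and flags candidates for replacement; the log format and the threshold are made up for illustration, since real sources (the IPMI SEL, Linux EDAC counters, vendor service processors) each have their own formats and policies:

```python
# Sketch: tally ECC events per DIMM slot and flag slots that should be
# replaced. The log lines and the threshold below are illustrative only.
import re
from collections import Counter

THRESHOLD = 3  # hypothetical "replace the DIMM" correctable-error threshold

sample_log = """\
ECC correctable error: DIMM_A1
ECC correctable error: DIMM_A1
ECC correctable error: DIMM_B2
ECC correctable error: DIMM_A1
ECC uncorrectable error: DIMM_C1
"""

correctable = Counter()
uncorrectable = set()
for line in sample_log.splitlines():
    m = re.match(r"ECC (correctable|uncorrectable) error: (\S+)", line)
    if not m:
        continue
    kind, slot = m.groups()
    if kind == "correctable":
        correctable[slot] += 1
    else:
        uncorrectable.add(slot)

# Uncorrectable errors mean replace immediately; correctable errors only
# once they repeat past the threshold, per the checklist above.
replace = uncorrectable | {s for s, n in correctable.items() if n >= THRESHOLD}
print(sorted(replace))  # ['DIMM_A1', 'DIMM_C1']
```

Against a real system you would feed this from the actual event log rather than a sample string, and use the threshold your vendor documents.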

These tips and solutions will fix most of the problems you encounter.  However, if you’re still having trouble, contact our Product Support team, or give us a call at 1-800-599-0071 (609-897-7014 for our international friends).

Posted in Memory Posts | Comments Off

Have You Hugged Your Appliance Today?

-written by Ed Esposito, Dataram Storage Blog Team Member

In the not-so-distant past, an appliance was something you got free when you opened a bank account, or something you used to burn some bread or whip up your favorite adult beverage. More recently, though, we are seeing more IT function moving to application-specific appliances. This is easily visible in the data storage segment. Over the years, a lot of function has been moved from hosts/servers down to storage arrays and then back again.

So while the washing machine continues on its spin cycle, moving function between storage and hosts, a third option becomes apparent: moving key functions of data management and performance management to specialized appliances. Most major storage vendors have adopted some form of appliance strategy to satisfy certain requirements. Some examples are de-dup appliances, virtualization appliances, and tiering and caching appliances. So the question remains: where does all this function really belong?

The question of best placement for all this functionality may never be completely answered, but one thing is certain: packaging all of this functionality and performance inside a large storage array drives up the cost. Customers are constantly making storage decisions based on the viability of that functionality versus the life cycle of the array. Will the machine be fully depreciated, or will the never-ending co-term leases be over, well before or after the functionality is needed or obsolete? In essence, IT managers are faced with the decision to buy a new and larger house to make space for function/performance features that would be better served in a stand-alone appliance.

Having an outboard, in-band cache appliance dramatically alters and simplifies the decision process. First, it separates performance from all the other bells and whistles. Second, it lowers the cost of the backend storage and creates a new, less costly, more flexible storage infrastructure: one that allows for a high-performance, on-demand model of data placement.

So give your cache appliance a hug today, while always remembering that no one ever complained about storage moving too fast. 

Posted in Storage Posts | Comments Off

RAM Powered Virtualization Behind the Cloud

-written by Michael Stys, Dataram Memory Blog Team Member

In today’s business environment, it is becoming commonplace to find IT managers being asked to support more with fewer resources, while at the same time operating with a flat or shrinking IT budget. This is one of the reasons IT departments are deploying creative solutions that lower overall long-term operational costs and provide innovative, scalable ways to address the changing landscape of how their users obtain information.

One of these innovative concepts is cloud computing. By centralizing infrastructure and applications, IT administrators are now able to support their clients more effectively and deliver content to an increasing number and variety of devices. It is as if corporations have created a modern-day mainframe consumed with information, and the smart phones and tablet PCs are the terminals we use to access this wealth of information. I guess we are building a smarter planet!

One of the main driving requirements at the center of cloud computing is processing power. For servers to run effectively and meet this increasing demand, they require a minimum configuration of RAM, and as applications are added, more RAM is required to maintain operational efficiency. This is why an increasing number of datacenters are moving toward virtualization to increase the number of VMs (virtual machines) that can run applications on a single hardware server. The short version: virtualization enables IT administrators to properly allocate the amount of hardware resources required by a specific application. Evidence of the relationship between increased memory and virtualization can be found in VMware’s recent vSphere 5 licensing update.

Thankfully, RAM on a $-per-GB basis continues to drop. For example, many servers purchased only a couple of years ago were sold with 4GB DIMMs. At the time, the cost of 8GB DIMMs was high enough that it typically doubled the cost of the server, so the IT department purchased two servers with 4GB DIMMs instead of one server with 8GB DIMMs. Today the price of an 8GB DIMM is comparable to or lower than what the 4GB DIMM cost a few years ago. And if you are really looking to maximize your server memory, most newer servers will now support 16GB or 32GB DIMM options.
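The trade-off above lends itself to a quick back-of-envelope comparison: fewer high-capacity DIMMs can cost about the same as filling every slot with smaller ones, while leaving room to grow. The slot count and per-DIMM prices below are placeholders for illustration, not quotes:

```python
# Back-of-envelope: ways to reach a target capacity in a 12-slot server.
# Prices are invented placeholders, not real market data.
slots = 12
target_gb = 96

for size_gb, price in [(4, 35), (8, 60), (16, 130)]:  # ($/DIMM, illustrative)
    dimms = target_gb // size_gb
    if dimms > slots:
        continue  # e.g. 24 x 4GB does not fit in 12 slots
    cost = dimms * price
    print(f"{dimms} x {size_gb}GB DIMMs: ${cost} "
          f"({slots - dimms} slots free for future growth)")
```

With these sample numbers, 16GB DIMMs cost only slightly more than 8GB DIMMs for the same capacity, but leave half the slots free for the next upgrade.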

I believe in getting the most use out of any asset I already have, and then purchasing new assets that will enable scalable growth into the future. If you have a server that is a year or two old, give it new life by maxing out its memory or replacing lower-capacity RAM with higher-capacity RAM. Then, when you want to purchase your next server, look to Dataram to provide you with lower-cost, higher-capacity RAM.

Posted in Memory Posts | Comments Off

What Do We Want in a Software Development Process?

-written by Tom Mayberry, Dataram Storage Blog Team Member

Whether you’re developing the software for a SAN optimization appliance or for a sales CRM system, most of us would agree on the importance of using a well-defined process to develop that software. There are many standard processes to choose from, including ISO, CMMI, RUP, Six Sigma and TQM, just to name a few. Since volumes have already been written on the relative merits of each of these methodologies, I would like to consider something different: what exactly do we want in a software process? I think you’ll find that it isn’t really much of a secret. The following is a list of the key attributes to consider when developing or selecting a software development process:

  • Usefulness – We, of course, want a process that satisfies the needs and wants of the developers. That is, a set of requirements must be established before acquiring or creating a process.
  • Cost Effectiveness – The cost of process development and process execution must be considered to ensure that it satisfies the business requirements of the organization and the needs of the developers.
  • Maintainability – The process must be easily modifiable to fix issues or to react to changing conditions.
  • Extensibility – We must be able to add features to our process as new requirements are revealed.
  • Usability – Our process must not be so complex that it becomes unusable by all but the experts. Ease of use is an important attribute of any process.
  • Efficiency – This is a measure of how quickly the process can be executed in order to solve the particular developer’s problem. This can also be considered the “performance” of the process.
  • Reliability – Finally, our process must be reliable. It must yield the same positive results regardless of the developer or the conditions in which it is being used. Reliability is often expressed as dependability or quality.

As you might suspect, these attributes are probably already familiar to you. If you re-read the bullets above, but this time replace the word “process” with “product” and the word “developer” with “customer”, the list looks a lot like the key attributes of the software product under development. Although the resulting deliverables and artifacts may be different, the characteristics of a product and of the process used to develop it are not really much different.

So when it’s time to choose a software development process (or to produce a product for the marketplace), keep the above attributes in mind. They will serve you well in either activity.

Posted in Storage Posts | Comments Off

Green Upgrades Save on Datacenter Costs

-written by Colin Pittham, Dataram Memory Blog Team Member

Recent reports indicate that datacenter power consumption in the US has almost doubled in the last five years and is set to continue at a similar rate in the years to come. With roughly 2W required to cool every watt of power consumed, and with energy costs also increasing, companies face potentially huge increases in the cost of running their datacenters.

Clearly it is important to understand the overall energy efficiency of the datacenter. Significant savings can be made by adopting greener cooling and lighting solutions. Equally, effective use of new server technologies and careful adoption of high density infrastructures will also help reduce energy costs.

But short of replacing all existing servers with new low-energy server solutions, what can be done with the existing infrastructure? Virtualization is a popular strategy, particularly on x86 servers, to improve utilization and make more cost-effective use of the server infrastructure. Unfortunately, virtualization is also a big consumer of memory, and memory has fast become the largest power consumer in a typical server. It seems that the drive for efficiency has a significant energy cost implication, too.

Customers are clearly seeking more power-efficient solutions, and green memory products are becoming a top shopping-list item for upgrades to current infrastructure. Today, at the lower memory density range for x86-based systems, customers can expect to pay between a 10% and 20% premium for lower-voltage DIMMs. In contrast, the newer and harder-to-source higher-density DIMMs command as much as a 100% price premium for the lower-voltage product. Adoption of these products is understandably slower today, but such is the demand for energy-efficient upgrades that customers have indicated that once the premium drops to the 50-60% mark, take-up will significantly improve.

This is not surprising, as green DDR3 consumes around 40% less power than DDR2 solutions. Further, recent studies have shown that moving to green DDR3 can provide savings in excess of 2 megawatt-hours of electrical power per year per server. Green upgrade options are not only cost-effective from an energy perspective; they also extend the life of your current architecture and are a lot easier and quicker to install than a bunch of new servers.
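That per-server figure is easy to sanity-check against the 2W-of-cooling-per-watt number cited earlier. In this rough worked example the DIMM count and per-DIMM power delta are assumed values, not measurements:

```python
# Rough annual energy math for a memory upgrade. The 2x cooling factor
# comes from the post; the DIMM count and per-DIMM savings are assumed.
dimms = 16
watts_saved_per_dimm = 5     # e.g. DDR2 vs. green DDR3 (illustrative)
cooling_overhead = 2.0       # 2W of cooling per watt consumed
hours_per_year = 24 * 365

it_savings_w = dimms * watts_saved_per_dimm            # 80W at the DIMMs
total_savings_w = it_savings_w * (1 + cooling_overhead)  # plus cooling
kwh_per_year = total_savings_w * hours_per_year / 1000
print(f"~{kwh_per_year:.0f} kWh/year saved per server")
```

With these assumptions the result is roughly 2,100 kWh per server per year, consistent with the 2 MWh figure quoted in the studies.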

Posted in Memory Posts | Comments Off

Deep Caches Can Unstir the I/O Blender

- written by Patrick Kelsey, Dataram Storage Blog Team Member

“I/O blender” is a colorful term coined to describe a phenomenon often seen in virtualized environments, where many virtual servers share a single storage array.  In such environments, the many sequential or semi-random I/O workloads generated by each virtual server collectively appear to the storage array as a heavily random workload, often dominated by writes.  The storage array has a limited ability to maintain performance in the face of a heavily random, write-dominated workload, especially in the mid-range, where the amount of cache available on the array controller is typically in the single-digit gigabytes.

Inserting a high-capacity caching appliance like our XcelaSAN into the SAN between the virtualized servers and the storage array greatly improves the situation.  With XcelaSAN you now have a quarter terabyte or more of cache absorbing that heavily random write stream at DRAM speeds, followed by deep sorting and careful feeding of the backend array with sequential data.  IOPS and latency improve, and previously over-provisioned arrays can be more fully utilized.
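The absorb-then-sort idea can be modeled in a few lines. This toy sketch shows only the reordering and coalescing principle, not anything about how XcelaSAN is actually implemented:

```python
# Toy model of "absorb random writes, feed the array sequentially":
# writes land in a cache keyed by block address, then flush in address
# order so the backend sees a sequential stream. Repeated writes to the
# same block are coalesced into one backend write.
import random

cache = {}  # block address -> latest data for that block

def write(addr, data):
    cache[addr] = data  # absorbed at DRAM speed; rewrites coalesce

def flush():
    # "Deep sorting": emit dirty blocks to the backend in address order.
    batch = sorted(cache.items())
    cache.clear()
    return batch

random.seed(1)
for _ in range(1000):                 # a heavily random write stream
    write(random.randrange(100), b"x")

batch = flush()
addrs = [a for a, _ in batch]
assert addrs == sorted(addrs)         # backend sees sequential addresses
assert len(addrs) <= 100              # 1000 writes collapsed by coalescing
```

The backend array handles the sorted, coalesced batch far better than it would handle the original random stream, which is the whole point of unstirring the blender.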

Posted in Storage Posts | Comments Off

Why use JEDEC Compliant DIMMs?

- written by Phil Muck, Dataram Memory Blog Team Member

Not all DIMMs are created equal.  That is where the JEDEC memory module committee, JC-45, comes into play.  The JC-45 committee comprises major DRAM manufacturers such as Samsung, Hynix and Micron, and major DIMM manufacturers such as Dataram.  The committee is responsible for the creation of memory module standards, covering DIMM types such as unbuffered DIMMs, registered DIMMs, buffered DIMMs, low-profile DIMMs and SODIMMs.

DIMM families such as DDR, DDR2, and DDR3 were all developed and standardized by JEDEC. Future DIMMs such as DDR3 Load Reduced and DDR4 are now in development.

Before a DIMM becomes a JEDEC standard memory module, months of up-front development take place.  All the topics listed below are reviewed and discussed.  If changes are required, adjustments to the design are made and the review process is repeated.  The final design is voted on and approved by the JEDEC members before a DIMM becomes a JEDEC standard.

  • DIMM Configuration
  • Connector Pin Outs
  • DIMM Dimensions
  • PCB Trace Topologies
  • Clock Stub Lengths
  • Impedance Controls
  • Power Plane
  • PCB Stack-Ups
  • Simulation
  • System Verification

Non-JEDEC-compliant DIMMs may have a low initial up-front cost, but the long-term costs of unstable systems, errors, system crashes and downtime outweigh that initial saving.  Skipping any of the above steps could have devastating consequences for the data integrity of your workstations, servers and blades.  An incorrect PCB stack-up may cause crosstalk issues; incorrect impedance may cause signal integrity issues; an inadequate power plane design may cause random ECC errors.  A DIMM is more than some DRAMs on a PCB: a correctly designed DIMM has hundreds of hours of engineering behind it.

Why use JEDEC-compliant DIMMs?  DIMMs developed in accordance with JEDEC memory module standards assure compatibility in today’s computer servers, desktop systems, telecommunications and other consumer and business applications.  A DIMM manufacturer that is an active JEDEC member understands the steps required to make a reliable DIMM, and also understands the consequences of taking shortcuts.

Posted in Memory Posts | Comments Off

Companies Achieve Major Cost Reductions by Re-thinking the Storage Architecture Model

- written by Phyllis Reiman, Dataram Storage Blog Team Member

Traditional storage architecture practice often includes a recurring cycle of technology refreshes or upgrades. An underlying premise of this model is that storage performance and optimization issues are best resolved by installing new, expensive storage systems and/or faster disk (SSD or SAS).  It’s a common preconception that the IT storage budget must include significant capital expense to replace storage systems every 3 to 4 years with the latest, greatest technology in order to support business application demand and the associated burgeoning data growth.

This model stems from a past when storage systems were less scalable and had a shorter reliability lifecycle.  Historically, maintaining storage systems beyond a 3-year window could lead to serious business outages due to hardware failure and data recovery, an unacceptable risk to the business.  Although tiering solutions in the form of HSM (Hierarchical Storage Management) were available more than twenty years ago, they generally used two inflexible tiers: disk and tape.

The reliability and scalability of today’s data storage systems enable adoption of a storage architecture model that houses data in the most cost-effective and performance-appropriate location.  In other words, companies can now match the business value of their data to the expense of storing it.  This is welcome progress in an era of flat budgets and corporate mandates to deliver cost-effective IT solutions that link to business objectives.
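Matching the business value of data to the expense of storing it can be sketched as a simple tier-placement rule: put each dataset on the cheapest tier that still meets its required access latency. The tier names, prices and latency figures below are invented for illustration only:

```python
# Sketch of value-based data placement across storage tiers.
# Prices and latencies are illustrative placeholders, not real figures.
tiers = [  # (name, $/GB/month, access latency in ms), cheapest first
    ("archive", 0.01, 5000.0),
    ("sata",    0.05,   15.0),
    ("fc",      0.15,    8.0),
    ("ssd",     0.60,    0.2),
]

def place(required_latency_ms):
    """Return the cheapest tier that is fast enough for this data."""
    for name, cost_per_gb, latency_ms in tiers:
        if latency_ms <= required_latency_ms:
            return name
    return tiers[-1][0]  # nothing cheaper qualifies: use the fastest tier

# Hypothetical datasets mapped to their latency requirements (ms).
datasets = {"backups": 10000.0, "home_dirs": 20.0, "oltp_db": 1.0}
for name, required in datasets.items():
    print(name, "->", place(required))
```

Real tiering, caching and archiving products weigh many more factors (access frequency, retention policy, protection level), but the cost logic they automate follows this shape.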

Organizations that adopt this approach to storage architecture design and deployment achieve major cost reductions by leveraging and better utilizing existing storage assets.  Today the reliable life of storage systems is at least twice that of the past, and recent advances in storage management solutions such as virtualization, caching, tiering, deduplication and archiving allow IT management to re-think traditional practices.  Aligning storage technology acquisition and cost structure with strategic objectives is now an achievable goal, one that enables IT to attain cost-effective results that support business growth while delivering new applications that enhance competitive advantage.

Posted in Storage Posts | Comments Off

Protect Your Computer by Being Grounded

-written by David Sheerr, Dataram Memory Blog Team Member

If you’ve ever gotten a static shock after walking across the living room in your socks on a cool, dry winter morning, you know just how easy it can be to unintentionally build up an electrostatic charge. Once you’ve built up a static charge, the instant you touch something conductive, an electrostatic discharge, or ESD, occurs.

When you touch a door knob after building up a static charge, the resulting ESD causes no harm beyond surprising you. When you touch a computer component after building up a static charge, the results can be disastrous. An ESD of just 10 volts can damage the silicon-based chips used in most computer components. Your motherboard, RAM, CPU, and graphics card are all susceptible to this type of damage, and even a low-voltage ESD can render these components completely useless.

To make matters worse, you could release a 10-volt ESD without knowing it. It’s entirely possible to build up and release a small charge without ever noticing it or feeling the shock associated with larger ESDs. That’s why it’s extremely important to ground yourself before doing any work inside your computer.

There are a few different methods you can use to ground yourself while working on a computer. An ESD wrist strap keeps you from releasing an electrostatic discharge by keeping you connected to a metal object at all times. The strap consists of a metal clamp attached to a wire that connects to a wristband with a metal plate on the inside. By attaching the metal clamp to a piece of metal, such as the inside of an aluminum computer case, you create a constant connection between your skin and that metal. By putting on the wrist strap before touching any computer components, you can ensure that any ESD will occur between you and the metal you’re attached to, rather than between you and the inside of your computer.

An ESD-resistant wrist strap is the ideal method for preventing an ESD from ruining your computer, but it’s not the only one. You can also touch a metal object prior to touching a computer component to ensure that the component remains safe. If you choose this method, you’ll have to touch metal each time you want to touch a computer component. It’s easy to forget to do this, which makes it a riskier way to prevent ESD.

There’s nothing more exciting than firing up your computer after making major upgrades, and nothing more frustrating than discovering that the new component is dead. By taking steps to prevent ESD, you can ensure your new hardware will work properly once you power it on.

Posted in Memory Posts | Comments Off