Dataram’s DRAM Device Qualification Program

~written by Jim Hampshire, Dataram Memory Blog Team Member

At Dataram, we use a “Device Qualification Program” for DRAM (Dynamic Random Access Memory) components, as well as all other active components used in our products.  In this process, only specific approved manufacturer part numbers are placed into our PLMS (Parts Logistics Management System), and only those defined manufacturer part numbers are permitted to be used.

These devices are qualified by Design Engineering using processes which parallel the original design validation.  This method guards against using unauthorized devices.  Manufacturer part numbers are verified against the Dataram PLMS at the time of receipt, and then again at each production lot’s first-piece inspection, where a component ID is performed.  Although this process requires constant monitoring and verification testing, it protects against design variations within any approved DRAM part number.
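As a rough illustration, the part-number gate described above boils down to a membership check at receipt and again at first-piece inspection. The part numbers, set contents and function names below are invented for illustration and are not Dataram's actual PLMS interface:

```python
# Hypothetical sketch of a PLMS-style part-number gate: only manufacturer
# part numbers that Design Engineering has qualified may pass receiving
# inspection. All part numbers shown are invented for illustration.

APPROVED_PART_NUMBERS = {
    "SAMSUNG-K4B2G0846D",   # example qualified DDR3 DRAM
    "MICRON-MT41J256M8",    # example qualified DDR3 DRAM
}

def verify_at_receipt(manufacturer_pn: str) -> bool:
    """Return True only if the exact manufacturer PN is approved."""
    return manufacturer_pn in APPROVED_PART_NUMBERS

def inspect_lot(lot_pns: list) -> list:
    """First-piece inspection: return any unapproved PNs found in the lot."""
    return [pn for pn in lot_pns if pn not in APPROVED_PART_NUMBERS]
```

Anything not in the approved set is rejected outright, which is the "only those defined manufacturer part numbers are permitted" rule in code form.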

The process is a three-stage program: (1) review of the manufacturer’s technical data sheet, (2) qualification testing on Dataram products, and (3) review/re-qualification of any changes to the DRAM.

The program begins with an engineering review of the manufacturer’s data sheet.  This is followed by the procurement of a sample quantity of the DRAMs for qualification.  Memory module(s) are selected for the qualification based upon their complexity and the availability of the systems to test the DRAM.  Sufficient quantities of these memory modules are assembled to fully evaluate the devices during the testing process.

The qualification test program parallels the original design verification test. The samples are subjected to initial functional testing and verification of timing and waveform integrity on the Memory Tester.  The samples are then inserted into the target system(s) and are operated from a minimum configuration to a maximum server/workstation configuration.  Wherever applicable, Dataram memory modules with DRAMs obtained from other manufacturers are also introduced with the Dataram memory modules under test in various inter-bank and intra-bank combinations from the lowest to a fully loaded capacity in order to verify compatibility.

After acceptance of a manufacturer’s DRAM PN, we continuously review all changes made by the manufacturer to that part number.  Any changes to the part and/or die revision automatically require a re-qualification prior to use.


Posted in Memory Posts

Inventory Control – What is MRP and Why Do We Use It?

~written by Nick Bukaczyk, Dataram Memory Blog Team Member

Often, employees throughout a company hear such words as purchase order, sales order, shortages, expediting, due dates, forecast, demand, kits, material, inventory, data and bill of material.  They all come together through MRP.

MRP (Material Requirements Planning) is the computerized ordering and scheduling system used by manufacturing and fabrication industries.  It uses bills of material, sales orders and forecasts to generate raw material requirements (components/parts).  It also gathers new order requirements as they come in, presents shortages if they exist and suggests ordering/building when necessary based on data gathered.
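In its simplest form, the requirements calculation MRP performs is a bill-of-material explosion netted against on-hand inventory. The sketch below illustrates the idea with a single-level BOM; all part names and quantities are invented:

```python
# Minimal MRP netting sketch: explode sales-order demand through a
# one-level bill of material, subtract on-hand inventory, and report
# what must be ordered. Data is invented for illustration.

bom = {  # finished good -> {component: quantity per unit}
    "8GB-RDIMM": {"DRAM-chip": 18, "PCB": 1, "SPD-EEPROM": 1},
}
on_hand = {"DRAM-chip": 500, "PCB": 40, "SPD-EEPROM": 10}

def net_requirements(finished_good: str, order_qty: int) -> dict:
    """Gross requirement minus on-hand stock = suggested order quantity."""
    shortages = {}
    for component, qty_per_unit in bom[finished_good].items():
        gross = qty_per_unit * order_qty
        net = gross - on_hand.get(component, 0)
        if net > 0:               # only shortages trigger a purchase
            shortages[component] = net
    return shortages
```

A real MRP run also offsets by lead times and walks multi-level BOMs, but the gross-to-net logic above is the core of the "presents shortages and suggests ordering" behavior.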

Many people throughout an organization contribute to the MRP process:

  1. Sales – enters orders, which create a finished goods requirement
  2. Production Control – reviews inventory levels and sales requirements, then provides manufacturing with work orders to satisfy demand
  3. Purchasing – reviews component stocking levels, forecasts and sales orders, then generates purchase orders for raw material to satisfy demand
  4. Receiving personnel – as raw goods arrive, items are received into inventory to show component availability for manufacturing orders
  5. Stockroom personnel – kit work orders, perform transfers and cycle count to maintain inventory accuracy so purchasing and production control see up-to-date availability
  6. Shipping personnel – relieve finished goods inventory and satisfy the sales order demand when shipping/closing orders

An MRP system is used to simultaneously meet three main goals:

  1. Ensure material is available for production and finished goods are available for delivery to customers while minimizing inventory levels
  2. Maintain certain stocking levels as dictated by company philosophy
  3. Plan manufacturing and purchasing activities with delivery schedules, lead-times and sales

MRP is used to guide the company in its daily inventory activity. It helps us maintain our standards to consistently provide customers with on-time deliveries and high-quality products.


Posted in Memory Posts

Why Should You Upgrade Your Memory?

~written by David Sheerr, Dataram Memory Blog Team Member

It’s no secret that upgrading your computer’s memory most often improves performance, but have you ever stopped to wonder why that is? Before you can understand why a RAM upgrade improves your computer’s performance, you have to understand the role memory plays in your computer.

Imagine for a moment that you’re sitting at a small desk in an office somewhere… unless you’re actually sitting at a small desk in an office right now – if that’s the case imagine yourself sitting at a small desk on a tropical island. You deserve it. Now imagine that a man walks up to you and hands you a pile of unsorted papers. Each paper is colored red, blue, green or yellow, and the man demands that you sort the papers by color.

You try to do as he says, but your small desk will only accommodate one pile of paper at a time. In order to complete the task you have to sort one color at a time, hand it off to the man, and then start on the next color.

If you had a larger desk, you could sort all four colors at once, drastically cutting the total time the task takes.

Think of your computer’s RAM as its desk. With more RAM, your computer will have a larger “work area”, allowing it to perform more operations simultaneously.

To put it a bit more technically, each time you start an application to work with a file, your computer has to load the application program as well as the data file to be edited into RAM. The more memory the computer has, the more work it can do at one time, which results in increased speed and performance.

So, why should you upgrade your computer’s memory? To give it a bigger workspace, and allow it to run faster. So go ahead, get your computer a bigger “desk” – it’ll thank you for it, and you’ll thank yourself for making your computer experience a whole lot better.

Posted in Memory Posts

DDR3 Server Memory: LV 1.35V or Standard 1.5V–What Shall I Choose?

~written by Nelson Rodriguez, Dataram Memory Blog Team Member

DDR3 low-voltage 1.35V RDIMMs are becoming mainstream in today’s x86 servers featuring Intel’s Xeon 5600 series CPUs—also known as “Westmere”.  Standard DDR3 DIMMs run at 1.5V.  DDR3 memory technology has evolved since it became mainstream in 2008, with a series of die shrinks each resulting in lower power consumption.

To see the “green” benefit, let’s compare power consumption in a 96GB capacity server.  First-generation 50nm DRAMs would draw about 65W of power.  A die shrink to 40nm lowered power consumption to under 43W, about a 34% reduction.  Shrinking the die again to 30nm technology AND lowering the voltage to 1.35V yields an additional 21% power reduction, down to under 34W.  These comparisons come from our friends at Samsung, the world’s largest maker of DRAMs.  As you calculate the power savings over an entire datacenter with large numbers of servers, the savings become substantial.  The advantages of LV (PC3L-) RDIMMs are obvious for servers, so what’s the catch?
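Those percentages can be verified with quick arithmetic, using the approximate wattages quoted above:

```python
# Power reduction across DRAM die shrinks for a 96GB server configuration,
# using the approximate wattages quoted above (per Samsung's comparison).
w_50nm, w_40nm, w_30nm_lv = 65.0, 43.0, 34.0

shrink_to_40nm = 1 - w_40nm / w_50nm        # ~0.34: about a 34% reduction
shrink_to_30nm_lv = 1 - w_30nm_lv / w_40nm  # ~0.21: a further 21% reduction
total_savings = 1 - w_30nm_lv / w_50nm      # ~0.48 overall vs. 50nm
```

Across two generations the combined saving is nearly half the original draw, which is why the datacenter-scale numbers add up so quickly.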

Performance users deploy servers with one item in mind—extracting the maximum application performance possible and using IT infrastructure as a competitive advantage.  This is especially true of our customers in the financial services industry.  Securities trading firms in particular are most concerned about low-latency operation and want the fastest possible CPU and memory speeds.  After all, on Wall Street, “TIME IS MONEY”.

Users with high-performance needs must be aware that when more than one PC3L low-voltage RDIMM is populated per memory channel, a DDR3-1333 speed DIMM will clock down to DDR3-1066 memory speed.  However, the 1.5V standard-voltage DDR3 counterpart will run at the full DDR3-1333 memory speed with 2 RDIMMs per channel in Xeon 5650 or higher based servers.  Performance users are better off selecting standard 1.5V RDIMMs when populating up to 12 RDIMMs in a 2-way server.
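The clock-down behavior above can be captured in a small helper. This encodes only the rule of thumb stated in this post, for Xeon 5650-or-higher servers with up to two RDIMMs per channel; it is a sketch, not an exhaustive population table:

```python
# Effective DDR3 speed per the rule described in this post (Xeon 5650 or
# higher, up to two RDIMMs per channel): 1.35V low-voltage RDIMMs clock
# down to DDR3-1066 when more than one is populated per memory channel,
# while standard 1.5V RDIMMs hold DDR3-1333.
def effective_speed(voltage: float, dimms_per_channel: int) -> int:
    """Return the effective DDR3 data rate in MT/s."""
    if voltage == 1.35 and dimms_per_channel > 1:
        return 1066   # low-voltage penalty with 2 DIMMs per channel
    return 1333       # full speed otherwise
```

With 12 RDIMMs in a 2-way server (two per channel across six channels), the helper shows why 1.5V parts are the performance choice.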

Which is best—maximum power or maximum performance?   As the customer, you decide based on your needs.  Dataram has the full complement of memory options to enable you to go any way you choose.  Dataram’s team of memory specialists and our industry-leading customer support group will help you choose!

Posted in Memory Posts

Wire Speed Storage

~written by Jason Caulkins, Dataram Storage Blog Team Member

In my last blog I discussed how storage would evolve into something that looks a lot like main memory today.  The challenge of storing data inside a server (for use in a scale-out datacenter, for example) is that you need to maintain data coherency, availability, performance and scalability despite the inevitable hardware, communications and human errors (oh, yeah – and don’t forget cost and usability).

This is a very tall order.

So, while we cannot solve this equation today, we can at least break it down into some byte-sized bits.

Let’s start with coherence and performance.  These are nearly mutually exclusive in large, geographically dispersed environments.  The issue is really about latency.  In addition to the latencies associated with connection speeds between remote sites, there are other, programmatic latencies to deal with.  These are associated with ensuring atomicity.

Today, a file or block of data is locked when accessed by a node or application, to avoid changes and data corruption if another node or application is trying to access the same data at the same time.  While this ensures coherency, the performance of the waiting node can suffer greatly, depending on how long the first application holds the lock and the efficiency of the locking mechanism itself.  In the worst cases, you can end up in perpetual locks, which is a bad thing.

A better approach is to implement a journaling system where both nodes or applications are allowed to view/modify the data at the same time, but keep track of the changes so that another process can check for collisions, and roll back the changes as required.  The idea here is that collisions are rare, so why pay the expensive locking penalty for every transaction?  The down side of the journaling system is that it takes up storage space keeping track of redundant data changes.  However, all one needs to do is compare the price of capacity vs. the price/benefit of performance.  For critical transactions (stock trades, financial transactions), the collision detection/repair mechanism must find and fix collisions before a transaction can be committed, adding yet another variable to the equation.
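The journaling idea amounts to optimistic concurrency control: every writer records the version it read, and a commit succeeds only if no other commit landed in the meantime; otherwise the change is rolled back and retried. The toy class below illustrates the principle only (it is single-process, not a distributed implementation):

```python
# Toy optimistic-concurrency sketch of the journaling approach described
# above: writers never take locks; instead each commit checks for a
# collision (someone else committed first) and signals a rollback if so.

class VersionedStore:
    def __init__(self, value):
        self.value = value
        self.version = 0

    def read(self):
        """Return (value, version) so the writer can detect collisions later."""
        return self.value, self.version

    def commit(self, new_value, read_version):
        """Commit only if nothing changed since we read; else signal rollback."""
        if read_version != self.version:
            return False          # collision: caller must roll back and retry
        self.value = new_value
        self.version += 1
        return True
```

Two writers that read the same version will race: the first commit wins, the second detects the collision and must re-read and retry, so the expensive path is paid only when a collision actually occurs.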

The fundamental issue here is tradeoff.  Physics and economics are at odds.  There is no free lunch.  In order to be low latency, you have to pay for things like solid state and very fast connections between sites.  These systems have a great price/performance metric, but this makes the price/GB attribute go way up.  If you just need bulk, local storage, slow, cheap, high-capacity mechanical drives provide the best price/GB, but have terrible price/performance metrics.

As long as there are different storage workloads and different fundamental storage technologies, it makes sense to tailor the storage system to the workload.

To further muddy the waters, storage workloads vary vastly within the same compute environment.  Even the same application changes its workload characteristics depending on the number of users, the amount of non-storage system resources, and even the version of the application being used.

So the trick to solving this problem is to create a storage environment that has all the characteristics required to address the various workload, performance and economic challenges presented by the applications.  This means that an advanced storage infrastructure must have elements of high price/performance as well as elements of low price per capacity.  It must be intelligent enough to dynamically assign resources as workloads demand.  And it must support modularly adding performance elements as well as capacity elements, not only on demand but in a predictive manner, so that the system always provides just the right price/performance for a dynamic application workload driven by ever-changing business needs.

Posted in Storage Posts

Building a White Box Server?

~written by Bob Crane, Dataram Memory Blog Team Member

Whether you’re a system integrator, an OEM or in IT, when building high-quality white box servers it is important to deliver a reliable system.

Memory is a big part of this, so picking a memory manufacturer who can provide consistent, reliable modules is crucial.  When selecting a memory vendor, consider the following:

1) Bill of material control—Will the supplier provide a memory module that offers a lock-down on the DRAM components and manufacturer?  Consistent product leads to better reliability and system stability.

2) High quality DRAM components from a tier 1 DRAM vendor—Going with the highest quality DRAM ensures the best reliability and longevity.  Some vendors will use a lesser-known DRAM maker, compromising quality for price.

3) Motherboard testing—Does the vendor utilize independent test labs such as CMTL, which tests memory on Intel motherboards?  This ensures the memory has been tested extensively, including thermal, operating system and compatibility testing.  Example: If you have a problem and you’re using tested motherboard memory, Intel will support the call you place.  If not, you will be asked to remove the memory and put tested memory in its place.

4) Support—Does the vendor provide technical expertise to ensure the right memory is designed in?  If there are any issues, will they respond in a timely manner and provide a lifetime warranty and troubleshooting support?

In summary, when building a white box server you need to design in high quality components, including memory.   Know what you’re getting and make sure the vendor can back it up.   Going with the lowest cost is not always best.   Dataram is a leading memory manufacturer in the white box server space and offers expertise in this marketplace.


Posted in Memory Posts

Storage Tiering with Flash and DRAM

-written by Tom Mayberry, Dataram Storage Blog Team Member

When considering the storage hierarchy, it is important to understand why it makes sense to buffer the Flash tier with DRAM given the price, performance and properties of the various solid state technologies available today. The primary solid state elements in the storage hierarchy are DRAM and Flash.

DRAM is faster than Flash (by up to 100x) and doesn’t “wear out” like Flash does. NAND Flash endures only a finite number of program/erase cycles, a limitation that doesn’t exist with DRAM. Flash has also struggled with data integrity (MLC vs. SLC), which has contributed to its slower adoption. But Flash is non-volatile (a big plus), whereas DRAM loses all data when power is removed.

In addition to its non-volatility, the pluses of Flash are that it costs less and you can configure larger storage capacities. An Intel 300GB Flash SSD costs about $550 (<$2.00/GB) while DRAM costs about $9.00/GB (8GB RDIMM), making Flash roughly 5X cheaper per gigabyte. It is a classic trade-off of speed vs. capacity in the storage hierarchy.

In the hierarchy, Flash comes in above HDD but below DRAM, and DRAM in turn sits below the CPU’s SRAM caches (L1/L2/L3). Therefore, it makes perfect sense to have DRAM buffer or cache Flash, with Flash providing the first tier of non-volatile storage and mechanical HDDs the final non-volatile tier. So even in an array with a large complement of Flash drives, it still makes sense from a performance, wear and price perspective to buffer the Flash tier with DRAM.
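A minimal sketch of that buffering arrangement: a small, fast LRU cache (standing in for DRAM) absorbs hot reads so the slower, wear-limited tier (standing in for Flash) is touched less often. Class names and sizes here are invented for illustration:

```python
from collections import OrderedDict

# Toy DRAM-over-Flash read cache: recently used blocks are served from a
# small LRU "DRAM" buffer, sparing the slower, wear-limited "Flash" tier.
class DramBufferedFlash:
    def __init__(self, flash: dict, dram_capacity: int):
        self.flash = flash                  # stands in for the Flash tier
        self.dram = OrderedDict()           # stands in for the DRAM buffer
        self.capacity = dram_capacity
        self.flash_reads = 0                # counts touches of the Flash tier

    def read(self, block: str):
        if block in self.dram:              # DRAM hit: fast path
            self.dram.move_to_end(block)
            return self.dram[block]
        self.flash_reads += 1               # DRAM miss: go to Flash
        value = self.flash[block]
        self.dram[block] = value
        if len(self.dram) > self.capacity:  # evict least recently used block
            self.dram.popitem(last=False)
        return value
```

Every repeated read of a hot block is a DRAM hit, which is exactly the performance and wear benefit the paragraph above describes.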

Posted in Storage Posts

Good Things Happen in Strange Ways – The Story of Sun Oracle M-Class Server Memory Upgrades

-written by Cecile Dagoret, Dataram Memory Blog Team Member – EMEA

When talking with server managers at large corporations, I realize that a criterion of great importance to them is how scalable the server platform they select is, and will be, over time.

I have heard them praise the Sun Oracle M-Class Enterprise server platform—M4000/M5000 and M8000/M9000—and it seems to me that these systems are to servers what Formula 1 is to cars! But I also hear them say that, as powerful and superior as these servers might be, they do not meet expectations in terms of scalability: the cost of components—memory among others—is far too high compared to the initial cost of the system.

As I speak with IT decision makers, many value creative alternatives that make the most of their initial choice of a server platform.

The 8GB DIMMs that are used in the 64GB memory kits for the M4000/M5000, and in the 128GB memory kits for the M8000/M9000, were recently the subject of an End-of-Life Notification by Oracle.  At first glance, this notification may appear to cause great anxiety to the owners of those server platforms.  Yet, this notification is actually good for Oracle, and what’s even more important, it’s good for the customers using this platform!

Many customers have not yet taken advantage of the substantial price/performance benefits associated with Dataram’s memory products. They will likely become aware of them now, during their journey to identify a company with a long history of providing quality memory for this platform (namely Dataram), once again proving that good things happen in strange ways!


Posted in Memory Posts

Pick the right Xeon to get the best memory and application performance!

-written by Paul Henke, Dataram Memory Blog Team Member

DDR3-1333 is today’s mainstream memory speed and the fastest memory supported in today’s best-selling x86 2-way servers with Intel processors.  However, many users purchase these servers with Xeon processors that do not support this top speed grade of memory.  Unfortunately, they are not aware that their CPU selection will result in DDR3-1333 memory being clocked down to DDR3-1066, or even the lowest memory speed, DDR3-800.

To make matters worse, some users assume the memory is not working correctly!  After all, the DIMMs are marked as PC3-10600 (DDR3-1333).  They conclude the memory must be the issue because a slower speed is displayed upon boot-up.  To run at maximum speed, these DIMMs must be mated to a Xeon CPU that has the fastest QPI (QuickPath Interconnect) speed of 6.4 GT/s.  In Intel’s model number scheme, you must select a model number of 5650 or higher (up to 5698) to run at -1333.  CPUs of model #5640 or lower have a slower QPI speed and cannot run DIMMs at DDR3-1333 memory speed.
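The model-number rule above reduces to a simple threshold check. This reflects only the rule of thumb quoted in this post, not a full Intel SKU table:

```python
# Per this post's rule of thumb: Xeon 5600-series models 5650 through 5698
# have the 6.4 GT/s QPI needed for DDR3-1333; models 5640 and below clock
# memory down (the actual ceiling may be DDR3-1066 or DDR3-800).
def max_memory_speed(xeon_model: int) -> int:
    """Return the top DDR3 data rate the CPU supports, per the rule above."""
    return 1333 if 5650 <= xeon_model <= 5698 else 1066
```

So a server with a Xeon 5640 will display DDR3-1066 at boot even with PC3-10600 DIMMs installed, which is the behavior users mistake for faulty memory.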

Since almost all of the die DRAM manufacturers are currently yielding test out at DDR3-1333 speed or higher, there is no price premium for selecting the faster DDR3-1333 DIMMs.  At Dataram we encourage users to select the faster memory, since there is no additional cost in doing so.  If you have existing slower Xeon CPUs that result in slower memory operation, perhaps a CPU upgrade is in your future, and you will already have the faster memory needed!  Or you may add new servers that do support this speed and migrate your previously purchased Dataram DDR3-1333 DIMMs into them.  In either case, memory is power, and the single best additive to make your applications run faster!  Dataram has you covered with a complete line of server memory.

Posted in Memory Posts

Surface Mount Technology – The Future is Now

-written by Guy Corsey, Dataram Memory Blog Team Member

Remember when Dick Tracy would speak to his police chief through his wrist watch? I always thought of how cool it would be to be able to do that.

Well, with the creation of Surface Mount Technology, what I thought was so cool has become reality. Surface Mount Technology (SMT) had its beginnings in the 1960s, used initially by NASA in the guidance systems of unmanned vehicles, but on a very small scale. During that time, through-hole technology was still the main manufacturing technology in electronics. Through-hole technology was robust and reliable, and people were comfortable with it because it had been around for some time—since the beginning of the printed circuit board. However, it was big, bulky and in many cases very heavy.

So, in “the 60s” IBM began to take the same components from the through-hole era and put them onto printed circuit boards that had no holes for components to go through. They bent the leads of the components so they could sit flat on the surface of the printed circuit boards. That’s where the name Surface Mount was derived from—components sitting on the surface. The experimental stage went on for some time, with engineers focused on developing components that would reduce cost and make items smaller and lighter. The old reliable through-hole technology held its ground during the 1970s and into the early 1980s, but in the background SMT was gaining recognition, and by the mid-1980s it was being used more and more in commercial electronics. Component manufacturers were able to make components more reliable, more component types had been developed to take the place of the old through-hole parts, and the Integrated Circuit had come a long way since its development in the 1950s.

Today there is a massive number of different SMT components. Some are so small you can barely see them with the naked eye. With the growth of SMT placement equipment, these components can be placed with extreme accuracy, speed and repeatability.  A component can be picked up by the equipment, measured, inspected and placed on a PCB in less than a tenth of a second, allowing hundreds of thousands of components to be placed in an hour.

Through-hole technology is rarely used in today’s commercial electronics. Consumers’ thirst for electronics is constantly on the rise, with mobile media, home appliances, televisions and most any other type of electronic device. Surface Mount Technology will continue to grow for years to come. With Surface Mount Technology, the future is now.

Posted in Memory Posts