How Do SSDs Work?

Here at ExtremeTech, we've often discussed the difference between different types of NAND structures: vertical NAND versus planar, or multi-level cell (MLC) versus triple-level cell (TLC) and quad-level cell (QLC). Now, let's talk about the more basic relevant question: How do SSDs work in the first place, and how do they compare with newer technologies, like Intel's non-volatile storage technology, Optane?

To understand how and why SSDs are different from spinning discs, we need to talk a little bit about hard drives. A hard drive stores data on a series of spinning magnetic disks called platters. There's an actuator arm with read/write heads attached to it. This arm positions the read/write heads over the correct area of the drive to read or write information.

Because the drive heads must align over an area of the disk in order to read or write data (and the disk is constantly spinning), there's a non-zero wait time before data can be accessed. The drive may need to read from multiple locations in order to launch a program or load a file, which means it may have to wait for the platters to spin into the proper position multiple times before it can complete the command. If a drive is asleep or in a low-power state, it can take several seconds more for the disk to spin up to full power and begin operating.

From the very beginning, it was clear that hard drives couldn't possibly match the speeds at which CPUs operate. Latency in HDDs is measured in milliseconds, compared with nanoseconds for your typical CPU. One millisecond is 1,000,000 nanoseconds, and it typically takes a hard drive 10-15 milliseconds to find data on the drive and begin reading it. The hard drive industry introduced smaller platters, on-disk memory caches, and faster spindle speeds to counteract this trend, but there's only so fast a drive can spin. Western Digital's 10,000 RPM VelociRaptor family is the fastest set of drives ever built for the consumer market, while some enterprise drives spun at up to 15,000 RPM. The problem is, even the fastest spinning drive with the largest caches and smallest platters is still achingly slow as far as your CPU is concerned.
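The spindle-speed ceiling described above is easy to quantify. As a rough sketch (the function name is my own, and seek time adds several more milliseconds on top of this), here's the average rotational latency implied by each spindle speed: on average, the platter must turn half a revolution before the target sector passes under the head.

```python
# Average rotational latency for a spinning drive: half a revolution,
# on average, before the target sector reaches the read/write head.

def avg_rotational_latency_ms(rpm: int) -> float:
    """Average rotational latency in milliseconds for a given spindle speed."""
    ms_per_revolution = 60_000 / rpm   # 60,000 ms per minute of rotation
    return ms_per_revolution / 2       # half a revolution on average

for rpm in (5_400, 7_200, 10_000, 15_000):
    print(f"{rpm:>6} RPM: {avg_rotational_latency_ms(rpm):.2f} ms")
```

Even a 15,000 RPM enterprise drive is stuck around 2 ms of rotational latency alone, which is why no amount of spindle-speed engineering closes the gap to the CPU.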

How SSDs Are Different

"If I had asked people what they wanted, they would have said faster horses." (Henry Ford)

Solid-state drives are called that specifically because they don't rely on moving parts or spinning disks. Instead, data is saved to a pool of NAND flash. NAND itself is made up of what are called floating gate transistors. Unlike the transistor designs used in DRAM, which must be refreshed multiple times per second, NAND flash is designed to retain its charge state even when not powered up. This makes NAND a type of non-volatile memory.

Flash cell structure

Image by Cyferz at Wikipedia, Creative Commons Attribution-Share Alike 3.0.

The diagram above shows a simple flash cell design. Electrons are stored in the floating gate, which then reads as charged (0) or not-charged (1). Yes, in NAND flash, a 0 means data is stored in a cell: it's the opposite of how we typically think of a zero or one. NAND flash is organized in a grid. The entire grid layout is referred to as a block, while the individual rows that make up the grid are called pages. Common page sizes are 2K, 4K, 8K, or 16K, with 128 to 256 pages per block. Block size therefore typically varies between 256KB and 4MB.
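The page/block arithmetic above is just multiplication. A quick sketch (the function name is my own) showing how the quoted extremes produce the 256KB-4MB range:

```python
# Block size is simply page size multiplied by pages per block.
def block_size_kb(page_size_kb: int, pages_per_block: int) -> int:
    """Block size in KB, given page size in KB and the page count per block."""
    return page_size_kb * pages_per_block

# The two extremes mentioned in the text:
print(block_size_kb(2, 128))    # smallest: 2K pages, 128 pages/block
print(block_size_kb(16, 256))   # largest: 16K pages, 256 pages/block (4MB)
```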

One advantage of this system should be immediately obvious. Because SSDs have no moving parts, they can operate at speeds far above those of a typical HDD. The following chart shows the access latency for typical storage media, given in microseconds.

SSD-Latency

Image by CodeCapsule

NAND is nowhere near as fast as main memory, but it's multiple orders of magnitude faster than a hard drive. While write latencies are significantly higher for NAND flash than read latencies, they still far outstrip traditional spinning media.

There are two things to notice in the above chart. First, note how adding more bits per cell of NAND has a significant impact on the memory's performance. It's worse for writes than for reads: typical triple-level-cell (TLC) read latency is 4x worse than single-level-cell (SLC) NAND, but write latency is 6x worse. Erase latencies are also significantly impacted. The impact isn't proportional, either: TLC NAND is nearly twice as slow as MLC NAND, despite holding just 50% more data (three bits per cell instead of two). This is also true for QLC drives, which store even more bits at varying voltage levels within the same cell.

The reason TLC NAND is slower than MLC or SLC has to do with how data moves in and out of the NAND cell. With SLC NAND, the controller only needs to know if the bit is a 0 or a 1. With MLC NAND, the cell may have four values: 00, 01, 10, or 11. With TLC NAND, the cell can have eight values, and with QLC, 16. Reading the proper value out of the cell requires the memory controller to use a precise voltage to ascertain whether any particular cell is charged.
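The relationship between bits per cell and the number of charge states the controller must distinguish is a simple power of two, which is why each extra bit doubles the precision required. A small sketch (names are my own):

```python
# Each additional bit per cell doubles the number of distinct charge
# states the controller must tell apart when reading the cell.
def voltage_states(bits_per_cell: int) -> int:
    """Number of distinguishable charge states for a given cell density."""
    return 2 ** bits_per_cell

for name, bits in (("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)):
    print(f"{name}: {bits} bit(s)/cell -> {voltage_states(bits)} states")
```

Distinguishing 16 QLC states within the same voltage window that SLC splits in two is what drives up read, write, and erase latencies as density climbs.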

Reads, Writes, and Erasure

One of the functional limitations of SSDs is that while they can read and write data very quickly to an empty drive, overwriting data is much slower. This is because while SSDs read data at the page level (meaning from individual rows within the NAND memory grid) and can write at the page level, assuming surrounding cells are empty, they can only erase data at the block level. The act of erasing NAND flash requires a high amount of voltage. While you can theoretically erase NAND at the page level, the voltage required stresses the individual cells around the cells being rewritten. Erasing data at the block level helps mitigate this problem.

The only way for an SSD to update an existing page is to copy the contents of the entire block into memory, erase the block, and then write the contents of the old block plus the updated page. If the drive is full and there are no empty pages available, the SSD must first scan for blocks that are marked for deletion but haven't been erased yet, erase them, and then write the data to the now-erased page. This is why SSDs can become slower as they age: a mostly-empty drive is full of blocks that can be written immediately, while a mostly-full drive is more likely to be forced through the entire program/erase sequence.
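The read-modify-write sequence above can be sketched in a few lines. This is purely illustrative (a real controller remaps pages to fresh blocks rather than erasing and reprogramming in place, and the data model is my own), but it shows why a one-page update touches an entire block:

```python
# Illustrative sketch of the program/erase cycle: updating one page
# forces a copy, a block-level erase, and a full reprogram.

def update_page(block: list, page_index: int, new_data: str) -> list:
    """Update a single page in a NAND block the only way NAND allows."""
    staged = list(block)            # 1. copy the block's pages into memory
    staged[page_index] = new_data   # 2. apply the one-page update to the copy
    block[:] = [None] * len(block)  # 3. erase the entire block (block-level only)
    block[:] = staged               # 4. reprogram the block with the updated copy
    return block

block = ["A", "B", "C", "D"]
print(update_page(block, 2, "C2"))  # ['A', 'B', 'C2', 'D']
```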

If you've used SSDs, you've likely heard of something called garbage collection. Garbage collection is a background process that allows a drive to mitigate the performance impact of the program/erase cycle by performing certain tasks in the background. The following image steps through the garbage collection process.

Garbage collection

Image courtesy of Wikipedia

Note that in this example, the drive has taken advantage of the fact that it can write very quickly to empty pages by writing new values for the first four pages (A-D). It's also written two new pages, E and H. Pages A-D are now marked as stale, meaning they contain information the drive has marked as out-of-date. During an idle period, the SSD will move the fresh pages over to a new block, erase the old block, and mark it as free space. This means the next time the SSD needs to perform a write, it can write directly to the now-empty Block X, rather than performing the program/erase cycle.
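The idle-time step in that diagram reduces to: keep the fresh pages, discard the stale ones, and free the old block. A hedged sketch (the dictionary data model and function name are my own, not a real controller interface):

```python
# Garbage collection sketch: stale pages are marked None; fresh pages
# are relocated to a new block so the old block can be erased and freed.

def collect(old_block: dict) -> tuple:
    """Return (new_block, freed_block) after garbage-collecting old_block."""
    new_block = {page: data for page, data in old_block.items()
                 if data is not None}   # copy only the fresh pages
    freed_block = {}                    # old block erased, marked as free space
    return new_block, freed_block

# Pages A-D are stale after being rewritten elsewhere; E and H are fresh.
old = {"A": None, "B": None, "C": None, "D": None, "E": "e", "H": "h"}
new_block, freed = collect(old)
print(new_block)  # {'E': 'e', 'H': 'h'}
```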

The next concept I want to discuss is TRIM. When you delete a file from Windows on a typical hard drive, the file isn't deleted immediately. Instead, the operating system tells the hard drive it can overwrite the physical area of the disk where that data was stored the next time it needs to perform a write. This is why it's possible to undelete files (and why deleting files in Windows doesn't typically clear much physical disk space until you empty the Recycle Bin). With a traditional HDD, the OS doesn't need to pay attention to where data is being written or what the relative state of the blocks or pages is. With an SSD, this matters.

The TRIM command allows the operating system to tell the SSD it can skip rewriting certain data the next time it performs a block erase. This lowers the total amount of data the drive writes and increases SSD longevity. Both reads and writes damage NAND flash, but writes do far more damage than reads. Fortunately, block-level longevity has not proven to be an issue in modern NAND flash. More data on SSD longevity, courtesy of the Tech Report, can be found here.

The last two concepts we want to talk about are wear leveling and write amplification. Because SSDs write data to pages but erase data in blocks, the amount of data being written to the drive is always larger than the actual update. If you make a change to a 4KB file, for example, the entire block that file sits within must be updated and rewritten. Depending on the number of pages per block and the size of the pages, you might end up writing 4MB worth of data to update a 4KB file. Garbage collection reduces the impact of write amplification, as does the TRIM command. Keeping a significant chunk of the drive free and/or manufacturer over-provisioning can also reduce the impact of write amplification.
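The worst-case arithmetic in that 4KB example is just a ratio of physical writes to logical writes. A back-of-the-envelope sketch (function name is my own):

```python
# Write amplification: physical bytes written divided by logical bytes
# the host actually asked to write.
def write_amplification(logical_write_kb: float, physical_write_kb: float) -> float:
    """Ratio of data physically written to data logically updated."""
    return physical_write_kb / logical_write_kb

block_kb = 16 * 256   # 16KB pages, 256 pages per block = 4MB block
print(write_amplification(4, block_kb))  # 1024.0: 4KB update, 4MB written
```

A factor of 1024 is the pathological ceiling; garbage collection, TRIM, and over-provisioning exist precisely to keep the real-world figure far lower.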

Wear leveling refers to the practice of ensuring certain NAND blocks aren't written and erased more often than others. While wear leveling increases a drive's life expectancy and endurance by writing to the NAND equally, it can actually increase write amplification. In order to distribute writes evenly across the disk, it's sometimes necessary to program and erase blocks even though their contents haven't actually changed. A good wear leveling algorithm seeks to balance these impacts.
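At its simplest, a wear-leveling policy is "program the free block that has been erased the fewest times." This is a deliberately simplistic sketch (real controllers track far more state, and the names are my own), but it captures the core idea:

```python
# Naive wear-leveling policy: always pick the free block with the
# lowest erase count, so wear spreads evenly across the NAND.
def pick_block(erase_counts: dict) -> str:
    """Return the block ID with the fewest erases so far."""
    return min(erase_counts, key=erase_counts.get)

free_blocks = {"block0": 120, "block1": 87, "block2": 98}
print(pick_block(free_blocks))  # block1
```

The trade-off the text describes shows up when the least-worn block currently holds valid data: the controller must relocate (program and erase) that data just to even out the wear, adding writes of its own.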

The SSD Controller

It should be obvious by now that SSDs require much more sophisticated control mechanisms than hard drives do. That's not to diss magnetic media: I actually think HDDs deserve more respect than they are given. The mechanical challenges involved in balancing multiple read/write heads nanometers above platters that spin at 5,400 to 10,000 RPM are nothing to sneeze at. The fact that HDDs meet this challenge while pioneering new methods of recording to magnetic media, and eventually wind up selling drives at 3-5 cents per gigabyte, is simply incredible.

SSD controller

A typical SSD controller

SSD controllers, however, are in a class by themselves. They often have a DDR3 or DDR4 memory pool to help with managing the NAND itself. Many drives also incorporate single-level-cell caches that act as buffers, increasing drive performance by dedicating fast NAND to read/write cycles. Because the NAND flash in an SSD is typically connected to the controller through a series of parallel memory channels, you can think of the drive controller as performing some of the same load-balancing work as a high-end storage array. SSDs don't deploy RAID internally, but wear leveling, garbage collection, and SLC cache management all have parallels in the big-iron world.

Some drives also use data compression algorithms to reduce the total number of writes and improve the drives lifespan. The SSD controller handles error correction, and the algorithms that control for single-bit errors have become increasingly complex as time has passed.

Unfortunately, we can't go into too much detail on SSD controllers because companies lock down their various secret sauces. Much of NAND flash's performance is determined by the underlying controller, and companies aren't willing to lift the lid too far on how they do what they do, lest they hand a competitor an advantage.

Interfaces

In the beginning, SSDs used SATA ports, just like hard drives. In recent years, we've seen a shift to M.2 drives: very thin drives, several inches long, that slot directly into the motherboard (or, in a few cases, into a mounting bracket on a PCIe riser card). A Samsung 970 EVO Plus drive is shown below.


NVMe drives offer higher performance than traditional SATA drives because they support a faster interface. Conventional SSDs attached via SATA top out at ~550MB/s in terms of practical read/write speeds. NVMe M.2 drives are capable of substantially faster performance, into the 3.2GB/s range.
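To put those throughput figures in perspective, here is a rough transfer-time comparison at the sequential speeds quoted above (sequential-only; real workloads vary with queue depth and file sizes, and the function name is my own):

```python
# Rough time to read a large file sequentially at a given throughput.
def seconds_to_read(size_gb: float, throughput_mb_s: float) -> float:
    """Seconds to read size_gb gigabytes at throughput_mb_s megabytes/second."""
    return size_gb * 1024 / throughput_mb_s

for name, mb_s in (("SATA SSD", 550), ("NVMe SSD", 3200)):
    print(f"10 GB over {name}: {seconds_to_read(10, mb_s):.1f} s")
```

At these figures, a 10GB sequential read drops from roughly 19 seconds over SATA to about 3 seconds over NVMe.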

The Road Ahead

NAND flash offers an enormous improvement over hard drives, but it isn't without its own drawbacks and challenges. Drive capacities and price-per-gigabyte are expected to continue to rise and fall, respectively, but there's little chance SSDs will catch hard drives in price-per-gigabyte. Shrinking process nodes are a significant challenge for NAND flash: while most hardware improves as the node shrinks, NAND becomes more fragile. Data retention times and write performance are intrinsically lower for 20nm NAND than 40nm NAND, even if data density and total capacity are vastly improved. Thus far, we've seen drives with up to 96 layers in-market, and 128 layers seems plausible at this point. Overall, the shift to 3D NAND has helped improve density without shrinking process nodes or relying on planar scaling.

Thus far, SSD manufacturers have delivered better performance by offering faster data standards, more bandwidth, and more channels per controller, plus the use of the SLC caches we mentioned earlier. Nonetheless, in the long run, it's assumed NAND will be replaced by something else.

What that something else will look like is still open for debate. Both magnetic RAM and phase-change memory have presented themselves as candidates, though both technologies are still in early stages and must overcome significant challenges to actually compete as a replacement for NAND. Whether consumers would notice the difference is an open question. If you've upgraded from an HDD to an SSD and then upgraded to a faster SSD, you're likely aware the gap between HDDs and SSDs is much larger than the SSD-to-SSD gap, even when upgrading from a relatively modest drive. Improving access times from milliseconds to microseconds matters a great deal, but improving them from microseconds to nanoseconds might fall below what humans can really perceive in most cases.

Intel's 3D XPoint (marketed as Intel Optane) has emerged as one potential challenger to NAND flash, and the only current alternative technology in mainstream production. Optane SSDs don't use NAND: they're built using non-volatile memory believed to be implemented similarly to phase-change RAM. They offer sequential performance similar to current NAND flash drives, but with vastly better performance at low queue depths. Drive latency is also roughly half that of NAND flash (10 microseconds, versus 20), and endurance is vastly higher (30 full drive-writes per day, compared with 10 full drive-writes per day for a high-end Intel SSD).

Optane1

Intel Optane performance targets

The first Optane SSDs have debuted as excellent add-ons for Kaby Lake and Coffee Lake. Optane is also available in standalone drives, and in a variety of server roles for enterprise computing. At the moment, Optane is still too expensive to match NAND flash, which benefits from substantial economies of scale, but this could change in the future. NAND will stay king of the hill for at least the next 3-4 years. But past that point, we could see Optane starting to replace it in volume, depending on how Intel and Micron scale the technology and how well 3D NAND flash continues to expand its cell layers (96-layer NAND is shipping from multiple players, with roadmaps for 128 layers on the horizon).

Check out our ExtremeTech Explains series for more in-depth coverage of today's hottest tech topics.
