It's not hard to get a capacious solid-state drive if you're running a server farm, but everyday users still have to be picky more often than not: either you get a roomy-but-slow spinning hard drive or give up that capacity in the name of a speedy SSD. Samsung may have finally delivered a no-compromise option, however. It's introducing a 4TB version of the 850 Evo that, in many cases, could easily replace a reasonably large hard drive. While it's not the absolute fastest option (the SATA drive is capped at 540MB/s sequential reads and 520MB/s writes), it beats having to resort to a secondary hard drive just to make space for your Steam game library.

Of course, there's a catch: the price. The 4TB 850 Evo will set you back a whopping $1,500 in the US, so it's largely reserved for pros and well-heeled enthusiasts who refuse to settle for rotating storage. Suddenly, the $700 2TB model seems like a bargain. Even if the 4TB version is priced into the stratosphere, though, it's a good sign that SSDs are turning a corner in terms of viability. It might not be long before high-capacity SSDs are inexpensive enough that you won't have to make any major sacrifices to put one in your PC.

Source: Engadget
Samsung Electronics, the world leader in advanced memory technology, announced today that it has begun mass producing the industry’s first NVMe* PCIe solid state drive (SSD) in a single ball grid array (BGA) package, for use in next-generation PCs and ultra-slim notebook PCs. The new BGA NVMe SSD, named PM971-NVMe, features an extremely compact package that contains all essential SSD components, including NAND flash memory, DRAM and controller, while delivering outstanding performance.

“Samsung’s new BGA NVMe SSD triples the performance of a typical SATA SSD, in the smallest form factor available, with storage capacity reaching up to 512GB,” said Jung-bae Lee, senior vice president, Memory Product Planning & Application Engineering Team, Samsung Electronics. “The introduction of this small-scale SSD will help global PC companies to make timely launches of slimmer, more stylish computing devices, while offering consumers a more satisfactory computing environment.”

Configuring the PM971-NVMe SSD in a single BGA package was enabled by combining 16 of Samsung’s 48-layer 256-gigabit (Gb) V-NAND flash chips, one 20-nanometer 4Gb LPDDR4 mobile DRAM chip and a high-performance Samsung controller. The new SSD is 20mm x 16mm x 1.5mm and weighs only about one gram (an American dime, by comparison, weighs 2.3 grams). The single-package SSD’s volume is approximately one hundredth that of a 2.5” SSD or HDD, and its surface area is about a fifth of an M.2 SSD’s, allowing much more design flexibility for computing device manufacturers.

In addition, the PM971-NVMe SSD delivers a level of performance that easily surpasses the speed limit of a SATA 6Gb/s interface. It enables sequential read and write speeds of up to 1,500MB/s (megabytes per second) and 900MB/s respectively, when TurboWrite** technology is used. The performance figures can be directly compared to transferring a 5GB-equivalent, Full-HD movie in about 3 seconds or downloading it in about 6 seconds.
It also boasts random read and write IOPS (input/output operations per second) of up to 190K and 150K respectively, to easily handle high-speed operations. A hard drive, by contrast, will only process up to 120 IOPS in random reads, making the new Samsung SSD more than 1,500 times faster than an HDD in this regard. The PM971-NVMe SSD line-up will be available in 512GB, 256GB and 128GB storage options. Samsung will start providing the new SSDs to its customers worldwide this month.

As a leading SSD provider, Samsung has a history of introducing advanced SSDs ahead of the industry. In June 2013, Samsung introduced the XP941 SSD in the M.2 (mini PCI-Express 2.0) form factor (80mm x 22mm), which was also the industry’s first PCIe SSD for PCs. Now, Samsung plans to rapidly expand its market base in the next-generation premium notebook PC sector with the new high-performance, BGA-package NVMe SSD. Later this year, Samsung plans to introduce more high-capacity and ultra-fast NVMe SSDs to meet increasing customer needs for improved performance and greater density.

* Often shortened to NVMe, NVM Express (Non-Volatile Memory Express) is an optimized, high-performance, scalable host controller interface with a streamlined register interface and command set designed for enterprise, datacenter and client systems that use non-volatile memory storage. For more information, please visit www.nvmexpress.org

** TurboWrite is a Samsung proprietary technology that temporarily uses certain portions of an SSD as a write buffer. TurboWrite delivers better PC experiences as users can enjoy much faster sequential write speeds.

Source: Samsung
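The headline figures in the press release above are easy to sanity-check. This minimal sketch assumes the decimal convention 5GB = 5,000MB and reads the 6-second movie figure as the 900MB/s write path (both assumptions, not statements from Samsung):

```python
# Back-of-the-envelope check of the PM971-NVMe figures quoted above;
# every input number comes from the press release.

def transfer_time_s(size_mb: float, rate_mb_s: float) -> float:
    """Seconds to move size_mb of data at a sustained rate_mb_s."""
    return size_mb / rate_mb_s

MOVIE_MB = 5 * 1000      # "5GB-equivalent" Full-HD movie (decimal GB assumed)
READ_MB_S = 1500         # sequential read, TurboWrite engaged
WRITE_MB_S = 900         # sequential write, TurboWrite engaged

print(round(transfer_time_s(MOVIE_MB, READ_MB_S), 1))   # 3.3 -- "about 3 seconds"
print(round(transfer_time_s(MOVIE_MB, WRITE_MB_S), 1))  # 5.6 -- "about 6 seconds"

# Random reads: 190K IOPS for the SSD vs. ~120 IOPS for a hard drive.
print(round(190_000 / 120))                             # 1583 -- "more than 1,500 times"
```

The numbers line up with the press release's claims within rounding.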
There has been a LOT of confusion around Windows, SSDs (hard drives), and whether or not they are getting automatically defragmented by automatic maintenance tasks in Windows. There's a general rule of thumb or statement that "defragging an SSD is always a bad idea." I think we can agree we've all heard this before. We've all been told that SSDs don't last forever, and when they die, they just poof and die. SSDs can only handle a finite number of writes before things start going bad. This is of course true of regular spinning-rust hard drives as well, but the conventional wisdom around SSDs is to avoid writes that are perceived as unnecessary.

I've seen statements around the web like this:

I just noticed that the defragsvc is hammering the internal disk on my machine. To my understanding defrag provides no value add on an SSD and so is disabled by default when the installer determines the disk is SSD. I was thinking it could be TRIM working, but I thought that was internal to the SSD and so the OS wouldn't even see the IO.

One of the most popular blog posts on the topic of defrag and SSDs under Windows is by Vadim Sterkin. Vadim's analysis has a lot going on. He can see that defrag is doing something, but it's not clear why, how, or for how long. What's the real story? Something is clearly running, but what is it doing and why? I made some inquiries internally, got what I thought was a definitive answer and waded in with a comment. However, my comment, while declarative, was wrong.

Windows doesn't defrag SSDs. Full stop. If it reports as an SSD it doesn't get defragged, no matter what. This is just a no-op message. There's no bug here, sorry. - Me in the Past

I dug deeper and talked to developers on the Windows storage team, and this post is written in conjunction with them to answer the question, once and for all: "What's the deal with SSDs, Windows and Defrag, and, more importantly, is Windows doing the RIGHT THING?"
It turns out that the answer is more nuanced than a simple yes or no, as is common with technical questions. The short answer is: yes, Windows does sometimes defragment SSDs; yes, it's important to intelligently and appropriately defrag SSDs; and yes, Windows is smart about how it treats your SSD. The long answer is this:

Actually Scott and Vadim are both wrong. Storage Optimizer will defrag an SSD once a month if volume snapshots are enabled. This is by design and necessary due to slow volsnap copy-on-write performance on fragmented SSD volumes. It's also somewhat of a misconception that fragmentation is not a problem on SSDs. If an SSD gets too fragmented you can hit maximum file fragmentation (when the metadata can't represent any more file fragments) which will result in errors when you try to write/extend a file. Furthermore, more file fragments means more metadata to process while reading/writing a file, which can lead to slower performance.

As far as Retrim is concerned, this command should run on the schedule specified in the dfrgui UI. Retrim is necessary because of the way TRIM is processed in the file systems. Due to the varying performance of hardware responding to TRIM, TRIM is processed asynchronously by the file system. When a file is deleted or space is otherwise freed, the file system queues the trim request to be processed. To limit the peak resource usage, this queue may only grow to a maximum number of trim requests. If the queue is at max size, incoming TRIM requests may be dropped. This is okay because we will periodically come through and do a Retrim with Storage Optimizer. The Retrim is done at a granularity that should avoid hitting the maximum TRIM request queue size where TRIMs are dropped.

Wow, that's awesome and dense. Let's tease it apart a little. When he says volume snapshots or "volsnap" he means the Volume Shadow Copy system in Windows.
This is used and enabled by Windows System Restore when it takes a snapshot of your system and saves it so you can roll back to a previous system state. I used this just yesterday when I installed a bad driver. A bit of advanced info here: Defrag will only run on your SSD if volsnap is turned on, and volsnap is turned on by System Restore, as one needs the other. You could turn off System Restore if you want, but that turns off a pretty important safety net for Windows.

One developer added this comment, which I think is right on: "I think the major misconception is that most people have a very outdated model of disk/file layout, and of how SSDs work."

First, yes, your SSD will get intelligently defragmented once a month. Fragmentation, while less of a performance problem on SSDs than on traditional hard drives, is still a problem. SSDs *do* get fragmented. It's also worth pointing out that what we (old-timers) think of as "defrag.exe" as a UI is really "optimize your storage" now. It was defrag in the past, and now it's a larger automated disk-health system.

Additionally, there is a maximum level of fragmentation that the file system can handle. Fragmentation has long been considered primarily a performance issue with traditional hard drives. When a disk gets fragmented, a single file can exist in pieces in different locations on a physical drive. That physical drive then needs to seek around collecting pieces of the file, and that takes extra time. This kind of fragmentation still happens on SSDs, even though their performance characteristics are very different. The file system's metadata keeps track of fragments and can only keep track of so many. Defragmentation in cases like this is not only useful, but absolutely needed.

SSDs also have the concept of TRIM. While TRIM (Retrim) is a separate concept from fragmentation, it is still handled by the Windows Storage Optimizer subsystem, and its schedule is managed by the same UI from the user's perspective.
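The metadata ceiling on file fragments described above can be illustrated with a toy extent list. Everything here (the cap of four extents, the coalescing rule, the `ToyFile` class itself) is a hypothetical stand-in for NTFS's real attribute-list limits, not the actual on-disk format:

```python
MAX_EXTENTS = 4  # hypothetical stand-in for the file system's metadata limit

class ToyFile:
    """A file tracked as a list of (start_lba, length) runs. Once the run
    list hits MAX_EXTENTS, extending the file non-contiguously fails --
    the 'errors when you try to write/extend a file' mentioned above."""

    def __init__(self):
        self.extents = []

    def append(self, start, length):
        if self.extents:
            last_start, last_len = self.extents[-1]
            if last_start + last_len == start:       # contiguous: grow in place
                self.extents[-1] = (last_start, last_len + length)
                return
        if len(self.extents) >= MAX_EXTENTS:
            raise OSError("file too fragmented to record another extent")
        self.extents.append((start, length))

    def defragment(self):
        """Relocate the file into one contiguous run -- roughly what the
        monthly Storage Optimizer pass does for badly fragmented files."""
        total = sum(length for _, length in self.extents)
        self.extents = [(0, total)]

f = ToyFile()
for start in (0, 100, 200, 300):   # four scattered allocations
    f.append(start, 10)
try:
    f.append(400, 10)              # a fifth fragment exceeds the cap
except OSError as e:
    print(e)                       # file too fragmented to record another extent
f.defragment()
f.append(400, 10)                  # succeeds once the file is one extent again
print(len(f.extents))              # 2
```

The point of the sketch is only the shape of the failure: the write fails not because the disk is full, but because the metadata can no longer describe the file, and defragmentation restores headroom.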
TRIM is a way for SSDs to mark data blocks as no longer in use. Writing to empty blocks on an SSD is faster than writing to blocks in use, as those need to be erased before they can be written again. SSDs internally work very differently from traditional hard drives and don't usually know which sectors are in use and which are free space. Deleting something means marking it as not in use. TRIM lets the operating system notify the SSD that a page is no longer in use, and this hint gives the SSD more information, which results in fewer writes and, theoretically, a longer operating life.

In the old days, power users would sometimes tell you to run this at the command line to see if TRIM was enabled for your SSD:

fsutil behavior query DisableDeleteNotify

A zero result indicates it is. However, this stuff is handled by Windows today in 2014, and you can trust that it's "doing the right thing." Windows 7, along with 8 and 8.1, comes with appropriate and intelligent defaults, and you don't need to change them for optimal disk performance. This is also true of Server SKUs like Windows Server 2008 R2 and later.

Conclusion

No, Windows is not foolishly or blindly running a defrag on your SSD every night, and no, Windows defrag isn't shortening the life of your SSD unnecessarily. Modern SSDs don't work the same way that we are used to with traditional hard drives. Yes, your SSD's file system sometimes needs a kind of defragmentation, and that's handled by Windows, monthly by default, when appropriate. The intent is to maximize performance and drive life. If you disable defragmentation completely, you are taking a risk that your file system metadata could reach maximum fragmentation and potentially get you into trouble.

Source: Hanselman
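The asynchronous TRIM queue and the periodic Retrim pass that the storage developer described earlier can be modeled in a few lines. The queue depth, the set-based bookkeeping, and the `ToyVolume` class are illustrative assumptions, not Windows internals:

```python
from collections import deque

QUEUE_MAX = 4  # hypothetical cap on in-flight TRIM requests

class ToyVolume:
    """Models the behavior described above: each free queues a TRIM
    request asynchronously, the queue is bounded (overflow is dropped),
    and a periodic Retrim re-covers every free range the SSD missed."""

    def __init__(self):
        self.freed = set()     # ranges the file system knows are free
        self.trimmed = set()   # ranges the SSD has actually been told about
        self.pending = deque()

    def free_range(self, r):
        self.freed.add(r)
        if len(self.pending) < QUEUE_MAX:
            self.pending.append(r)   # queued for async processing
        # else: dropped -- harmless, because the Retrim pass will catch it

    def process_queue(self):
        while self.pending:
            self.trimmed.add(self.pending.popleft())

    def retrim(self):
        """Storage Optimizer's scheduled pass: re-issue TRIM for all
        free space, recovering any requests dropped at the queue."""
        self.trimmed |= self.freed

vol = ToyVolume()
for r in range(10):        # ten deletions arrive in a burst
    vol.free_range(r)
vol.process_queue()
print(len(vol.trimmed))    # 4 -- six TRIMs were dropped at the full queue
vol.retrim()
print(len(vol.trimmed))    # 10 -- Retrim covered the remainder
```

This is why dropped TRIMs are "okay" in the developer's words: nothing is lost permanently, only deferred until the scheduled Retrim.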
A Japanese research team has developed a technology to drastically improve the writing speed, power efficiency and cycling capability (product life) of storage devices based on NAND flash memory (SSDs). The team is led by Ken Takeuchi, professor at the Department of Electrical, Electronic and Communication Engineering, Faculty of Science and Engineering of Chuo University. The development was announced at the 2014 IEEE International Memory Workshop (IMW), an international academic conference on semiconductor memory technologies, which took place from May 18 to 21, 2014, in Taipei. The title of the paper is "NAND Flash Aware Data Management System for High-Speed SSDs by Garbage Collection Overhead Suppression."

With NAND flash memory, it is not possible to overwrite data in the same memory area, making it necessary to write the data to a different area and then invalidate the old area. As a result, data becomes fragmented, increasing the invalid area and decreasing usable storage capacity. Therefore, NAND flash memories carry out "garbage collection," which rearranges fragmented data contiguously and erases blocks of invalid area. This process takes 100ms or longer, drastically decreasing the writing speed of the SSD.

In September 2013, to address this issue, the research team developed a method to prevent data fragmentation by making improvements to the middleware that controls storage for database applications. It makes (1) the "SE (storage engine)" middleware, which assigns logical addresses when application software accesses a storage device, and (2) the FTL (flash translation layer) middleware, which converts logical addresses into physical addresses on the SSD-controller side, work in conjunction.

This time, the team developed a more versatile method that can be used for a wider variety of applications. The new method forms a middleware layer called the "LBA (logical block address) scrambler" between the file system (OS) and the FTL.
The LBA scrambler works in conjunction with the FTL and converts the logical addresses of data being written to reduce the effect of fragmentation. Specifically, instead of writing data to a new blank page, data is written to a fragmented page located in the block to be erased next. As a result, the ratio of invalid pages in the block to be erased increases, reducing the number of valid pages that need to be copied to another area at the time of garbage collection.

In a simulation, the research team confirmed that the new technology improves the writing speed of an SSD by up to 300%, reduces power consumption by up to 60%, and reduces the number of write/erase cycles by up to 55%, increasing product life. Because the new method requires no changes to the NAND flash memory itself and is implemented entirely in middleware, it can be applied to existing SSDs as-is.

Source: Nikkei Technology
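The core idea in the article above, steering rewrites so they invalidate pages in the block scheduled for erase, can be sketched with a toy page map. The block geometry, the victim-selection policy, and the function names are all illustrative assumptions, not taken from the paper:

```python
# blocks[b][p] == 1 means page p of block b still holds valid data.
blocks = [
    [1, 1, 1, 1],   # block 0: the victim, scheduled for erase next
    [1, 0, 1, 0],   # block 1: partially fragmented
]
VICTIM = 0

def gc_copy_cost(blocks, victim):
    """Valid pages GC must relocate before the victim can be erased --
    the overhead the LBA scrambler tries to suppress."""
    return sum(blocks[victim])

def scrambled_rewrite(blocks, victim):
    """LBA-scrambler policy: when the host rewrites a logical page,
    prefer one whose stale copy lives in the victim block, so the
    rewrite invalidates a page there instead of somewhere random."""
    for p, valid in enumerate(blocks[victim]):
        if valid:
            blocks[victim][p] = 0   # old copy invalidated; new data lands elsewhere
            return True
    return False

print(gc_copy_cost(blocks, VICTIM))   # 4 -- pages GC would have to copy out
for _ in range(3):                    # three host rewrites, scrambled
    scrambled_rewrite(blocks, VICTIM)
print(gc_copy_cost(blocks, VICTIM))   # 1 -- the erase is now far cheaper
```

Fewer valid pages in the victim block means fewer copies during garbage collection, which is exactly where the reported speed, power, and endurance gains come from.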