• Paranoid Factoid@beehaw.org · 1 year ago

    Well, you’re absolutely right that they’ve released a Mac Pro. Looking it over, though, the machine is still a terrible deal in comparison to Threadripper: the Mac Pro maxes out at 192GB RAM and 76 GPU cores, while Threadripper maxes out at 1.5TB RAM and enough PCIe lanes for four GPUs.

    From a price/performance standpoint, you could beat this thing with a lower-end Ryzen 5950X CPU, 256GB RAM, and two Nvidia 4080 GPUs for maybe $2500-$3000 less than the maxed-out Mac Pro.

    But I was wrong there. Thank you for the correction.

    NOTE: A 64-core Threadripper with 512GB RAM and four 4090 GPUs would be suitable for professional machine learning tasks. Better GPUs in the pro space cost much more, though. A 16-core 5950X, 256GB RAM, two 4090 GPUs, and a PCIe SSD RAID would do to edit 8K/12K raw footage with color grading and compositing in DaVinci Resolve or Premiere/After Effects. It would be a good Maya workstation for feature or broadcast 3D animation too.

    That Mac Pro would make a good editing workstation in the broadcast/streaming space, especially if you’re using Final Cut and Motion, but it is not suitable for machine learning work. And I wouldn’t choose it as a DaVinci Resolve color grading station. Apple’s 6K Pro Display XDR is not suitable for pro color grading on the feature side, but would probably be acceptable for broadcast/streaming projects. On the flip side, a pro color grading monitor is $25K to start and strictly PC anyway.

    • Exec@pawb.social · 1 year ago

      When you’re nearing the terabyte range of RAM you should consider moving your workload to a server anyway.

      • Paranoid Factoid@beehaw.org · 1 year ago

        It really depends on the kind and size of data you’re moving. A system bus is a whole lot faster than 10Gb networking. If your data is small and the workload heavy, say a Monte Carlo sim, clusters and cloud make sense. But reverse that, like 12K 14-bit raw footage, which makes massive files, and you want that shit local and striped across a couple of M.2 drives (or more). Close is fast.
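
        For a rough sense of that gap, here’s a back-of-envelope sketch (the per-drive and link figures below are illustrative assumptions, not measurements):

```python
# Back-of-envelope throughput comparison: 10 Gb Ethernet vs. a local stripe
# of two fast M.2 NVMe drives. All figures are rough, assumed round numbers.
ETHERNET_10G_GB_PER_S = 10 / 8   # 10 Gb/s link ~= 1.25 GB/s before protocol overhead
NVME_DRIVE_GB_PER_S = 7.0        # assumed sequential read for a fast PCIe 4.0 M.2 drive
STRIPE_DRIVES = 2                # "a couple M.2 drives" striped (RAID 0)

local_stripe = NVME_DRIVE_GB_PER_S * STRIPE_DRIVES
print(f"10GbE ceiling:   {ETHERNET_10G_GB_PER_S:.2f} GB/s")
print(f"2x NVMe stripe:  {local_stripe:.2f} GB/s")
print(f"Local advantage: ~{local_stripe / ETHERNET_10G_GB_PER_S:.0f}x")
```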

    • monsieur_jean@kbin.social · 1 year ago

      The Apple M series is not ARM based. It’s Apple’s own RISC architecture. They get their performance in part from the proximity of the RAM to the GPU, yes, but not only. Contrary to ARM, which has become quite bloated after decades of building on the same instruction set (and adding new instructions to drive adoption even when that runs contrary to RISC’s philosophy), the M series started anew with no technological debt. Also, Apple controls both the hardware and the software, as well as the languages and frameworks used by third-party developers for their platform. They therefore have 100% compatibility between their chips’ instruction set, their system, and third-party apps. That allows them to make CPUs with excellent efficiency. Not to mention that speculative execution, a big driver of performance nowadays, works better on RISC, where all the instructions have the same size.

      You are right that they do not cater to power users who need a LOT of power, though. But 95% of users don’t care; they want long battery life and light, silent devices. Sales of desktop PCs have been falling for more than a decade now, as have the investments made in CISC architectures. People don’t want them anymore. With the growing number of manufacturers announcing their adoption of the new open-source RISC-V architecture, I am curious to see what the future of Intel and AMD is, especially with China pouring billions into building its own silicon supply chain. The next decade is going to be very interesting. :)

      • skarn@discuss.tchncs.de · 1 year ago

        The whole “Apple products are great because they control both software and hardware” always made about as much sense to me as someone claiming “this product is secure because we invented our own secret encryption”.

        • anlumo@feddit.de · 1 year ago

          Here’s an example of that: Apple needed to ship an x86_64 emulator for the transition, but emulation is slow and would have made the new machines appear much slower than the older Intel-based ones. So what they did was come up with their own private instructions that an emulator needs to greatly speed up its task, and added them to the chip. Now most people don’t even know whether they’re running native or emulated programs, because the difference in performance is so small.
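
          If you’re curious which one a given program is, macOS does expose this to userspace. A minimal Python sketch, assuming an Apple Silicon Mac, reading the documented sysctl.proc_translated key (1 means the process is running under Rosetta 2):

```python
# Check whether the current process runs natively or under Rosetta 2 (macOS only).
# Uses the documented sysctl key "sysctl.proc_translated":
#   1 -> translated by Rosetta 2, 0 -> native arm64, key absent -> Intel Mac.
import ctypes
import ctypes.util

def is_rosetta_translated() -> bool:
    libc = ctypes.CDLL(ctypes.util.find_library("c"))
    val = ctypes.c_int(0)
    size = ctypes.c_size_t(ctypes.sizeof(val))
    err = libc.sysctlbyname(b"sysctl.proc_translated",
                            ctypes.byref(val), ctypes.byref(size),
                            None, ctypes.c_size_t(0))
    return err == 0 and val.value == 1

if __name__ == "__main__":
    print("translated (Rosetta 2)" if is_rosetta_translated() else "native")
```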

      • barsoap@lemm.ee · 1 year ago

        > The Apple M series is not ARM based. It’s Apple’s own RISC architecture.

        M1s through M3s run ARMv8-A instructions. They’re ARM chips.
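
        Easy to check from userspace, too. A minimal Python sketch, assuming it runs as a native process on an Apple Silicon Mac:

```python
# On an Apple Silicon Mac a native Python reports the standard AArch64
# architecture name, not an Apple-specific ISA identifier.
# (A Rosetta-translated Python would report "x86_64" instead.)
import platform
print(platform.machine())   # -> "arm64" on M1/M2/M3
```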

        What you might be thinking of is that Apple has an architectural license, that is, they are allowed to design their own logic implementing the ARM instruction set, not just etch existing ARM designs into silicon. Qualcomm, Nvidia, Samsung, AMD, and Intel all hold such a license. How much use they actually make of it is a different question; e.g. AMD doesn’t currently ship any ARM designs of their own, I think, and the platform processor that comes in every Ryzen etc. is a single “barely not a microcontroller” (Cortex A5) core straight off ARM’s design shelves, while K12 never made it to market.

        You’re right about the future being RISC-V, though; ARM pretty much fucked themselves with that Qualcomm debacle. Android and Android apps by and large don’t care what architecture they run on, RISC-V has already pretty much eaten the microcontroller market (unless you need backward compatibility for some reason; heck, there are still new Z80s getting etched), and Android devices are a real good spot to grow into. It’s still going to take a hot while before RISC-V appears on the desktop proper, though: performance-wise, server loads will come first, and on the sitting-in-front-of-it side, office thin clients will come first. Maybe, maybe, GPUs. That’d certainly be interesting, the GPU being simply vector cores with a slim instruction extension for some specialised functionality.

        • monsieur_jean@kbin.social · 1 year ago

          Thanks for the clarification. I wonder if/when Microsoft is going to hop on the RISC-V train. They did a crap job trying their own ARM version a few years back and gave up. A RISC-V Surface with a compatible Windows 13 and a proper binary translator (like Apple did with Rosetta) would shake up the PC market real good!