Sort of similar to the Great Filter theory, but applied to time travel technology.

  • cynar@lemmy.world · 8 months ago

    It seems to allow it, in a sense. The “errors” are also left on the transmitting end. By transmitting them normally, the two signals can be combined to recreate the data. Something is shared, at some point.
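    For concreteness, this reads like the standard quantum teleportation protocol, where Bob’s half of an entangled pair is a scrambled version of the data until Alice’s measurement result (two classical bits) arrives over a normal channel and selects the correction. A minimal numpy sketch of that textbook protocol, my own illustration rather than anything from the thread:

    ```python
    import numpy as np

    # Pauli operators and the identity
    I2 = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)

    # The data: |psi> = a|0> + b|1> (normalized)
    a, b = 0.6, 0.8j
    psi = np.array([a, b], dtype=complex)

    # Shared Bell pair |Phi+>; Alice holds its first qubit, Bob the second
    phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

    # Full 3-qubit state: (Alice's data qubit) tensor (shared pair)
    state = np.kron(psi, phi_plus)

    # Bell basis on Alice's two qubits, keyed by the two classical bits
    bell = {
        (0, 0): np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2),   # Phi+
        (0, 1): np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2),   # Psi+
        (1, 0): np.array([1, 0, 0, -1], dtype=complex) / np.sqrt(2),  # Phi-
        (1, 1): np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2),  # Psi-
    }

    # Alice measures her two qubits in the Bell basis (Born-rule sampling)
    outcomes = list(bell)
    probs = []
    for key in outcomes:
        proj = np.kron(np.outer(bell[key], bell[key].conj()), I2)
        branch = proj @ state
        probs.append(np.vdot(branch, branch).real)
    rng = np.random.default_rng()
    z_bit, x_bit = outcomes[rng.choice(4, p=np.array(probs) / sum(probs))]

    # Bob's qubit right after the measurement: a scrambled |psi>, useless
    # until he learns which of the four outcomes occurred
    bob = np.kron(bell[(z_bit, x_bit)].conj(), I2) @ state
    bob /= np.linalg.norm(bob)

    # The two classical "error" bits select the correction that recreates the data
    fix = np.linalg.matrix_power(Z, z_bit) @ np.linalg.matrix_power(X, x_bit)
    recovered = fix @ bob
    print(abs(np.vdot(recovered, psi)) ** 2)  # fidelity ~= 1.0
    ```

    Without the correction step, Bob’s qubit is (on average over outcomes) maximally mixed, which is why nothing usable arrives faster than light.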

    It’s definitely a “we’re not sure what’s actually going on” type of situation, though. Either both ends are drawing on some (otherwise) hidden data layer, or FTL transmission is allowed so long as no information flows (information as defined by quantum mechanics). It just turns out that weird entanglement-based systems are the only ones (we’ve found so far) able to send informationless transmissions.

    Both solutions would give deeper insights into reality and its underpinnings. Unfortunately, we’ve not actually teased out which is happening.

    My gut feeling is that the speed of light is a side effect of a fixed/stable causality across all rest frames. Hidden information seems to be a lot more cumbersome.

    • SmoothOperator@lemmy.world · 8 months ago

      I’m not sure what you mean. If something is “shared”, but this something contains no information, how can we know that it was shared? In what sense does this something even exist?

      The perfect correlation of entangled particles is well established, and very cool, but perfect correlation does not require sharing of “something”. The perfect correlation is baked into the system from the start, from local interactions only.
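      As a toy illustration of “baked in from the start” (my own sketch, in the spirit of the classic two-gloves-in-two-boxes example): a hidden value is fixed locally at the source, each particle carries its own copy away, and nothing needs to be exchanged at measurement time. (Bell’s theorem, discussed below, is about why models of exactly this kind still can’t reproduce the full quantum statistics.)

      ```python
      import random

      # A hidden value is fixed at the source; each particle carries its own
      # copy away. Nothing is transmitted at measurement time.
      def make_pair():
          lam = random.choice([+1, -1])
          return lam, lam

      pairs = [make_pair() for _ in range(100_000)]
      print(sum(a == b for a, b in pairs) / len(pairs))  # 1.0: perfect correlation
      ```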

        • bunchberry@lemmy.world · 5 months ago

          That’s actually not quite accurate, although that is how it is commonly interpreted. The reason it is not accurate is that Bell’s theorem simply doesn’t show there are no hidden variables; indeed, Bell himself states very clearly what the theorem proves in the conclusion of his paper:

          In a theory in which parameters are added to quantum mechanics to determine the results of individual measurements, without changing the statistical predictions, there must be a mechanism whereby the setting of one measuring device can influence the reading of another instrument, however remote. Moreover, the signal involved must propagate instantaneously, so that such a theory could not be Lorentz invariant.[1]
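          For concreteness (my own worked example, not part of the quote): in the CHSH form of the theorem, any local hidden variable model is bounded by |S| ≤ 2, while the entangled singlet state, whose correlation between analyzer angles is E(a, b) = −cos(a − b), exceeds that bound:

          ```python
          import numpy as np

          # Singlet-state correlation between analyzer angles a and b
          def E(a, b):
              return -np.cos(a - b)

          # Standard CHSH angle choices (radians)
          a, a2, b, b2 = 0, np.pi / 2, np.pi / 4, 3 * np.pi / 4

          S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
          print(abs(S))  # 2*sqrt(2) ~= 2.83, above the local bound of 2
          ```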

          In other words, you can have hidden variables, but those hidden variables would not be Lorentz invariant. What is Lorentz invariance? To be “invariant” basically means to be absolute, that is to say, unchanging across reference frames. The term Lorentz refers to Lorentz transformations in Minkowski space, i.e. the four-dimensional spacetime described by special relativity.
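          Concretely (standard textbook formulas, added here for illustration): a boost along x at speed v, in units where c = 1, mixes t and x between frames, yet the spacetime interval comes out the same in every frame. Quantities like s² that all frames agree on are what “Lorentz invariant” means:

          ```latex
          % Lorentz boost along x (units with c = 1); y and z are unchanged
          t' = \gamma\,(t - v x), \qquad x' = \gamma\,(x - v t), \qquad \gamma = \frac{1}{\sqrt{1 - v^2}}
          % The interval is frame-independent (Lorentz invariant):
          s^2 = t'^2 - x'^2 = t^2 - x^2
          ```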

          This implies you can actually have hidden variables under one of two conditions:

          1. Those hidden variables are invariant under some framework other than special relativity. The signals involved would have to travel faster than light, contradicting special relativity, so you would need to replace it with some other framework.
          2. Those hidden variables are variant, meaning they do indeed change based on reference frame. This would allow local hidden variable theories, and would thus even allow current quantum mechanics to be interpreted as a statistical theory in a more classical sense, since it evades the PBR theorem.[2]

          The first view is unpopular because special relativity is the basis of quantum field theory, so contradicting it would mean contradicting one of our best theories of nature. There has been some fringe research into reformulating special relativity to make it compatible with invariant hidden variables,[3] but given that quantum mechanics has been around for over a century and nobody has figured this out, I wouldn’t get your hopes up.

          The second view is unpopular because it can be shown to violate a more subtle intuition we all tend to have, one taken so much for granted that I’m not sure there’s even a name for it. The intuition is that not only should there be no mathematical contradictions within any single reference frame, so that an observer never sees the laws of physics break down, but that there should additionally be no contradictions when all possible reference frames are considered simultaneously.

          It is not physically possible to observe all reference frames simultaneously, so one can argue that this assumption should be abandoned because it is metaphysical and not something you can ever observe in practice.[4] Note that inconsistency between all reference frames considered simultaneously does not mean observers will disagree over the facts: if one observer asks another for information about a measurement result, they are still acquiring information about that result from their own reference frame, just indirectly, and thus they would never run into a disagreement in practice.

          However, people still tend to find the notion of simultaneous consistency too intuitive to abandon, so this view remains unpopular, and most physicists choose to just interpret quantum mechanics as if there are no hidden variables at all. You can argue that ruling out #1 is enforced by the evidence, but ruling out #2 is more of a philosophical position, so ultimately the view that there are no hidden variables is not “proven” outright, but proven only if you accept certain philosophical assumptions.

          There is actually a second way to restore local hidden variables which I did not go into detail on here: superdeterminism. Superdeterminism basically argues that if you had not just a theory describing how particles behave now, but a more holistic theory that includes the entire initial state of the universe going back to the Big Bang, tracing out how all particles evolved to their current state, then you could place restrictions on how that system develops such that it would always reproduce the correlations we see, even with hidden variables that are indeed Lorentz invariant.

          The obvious problem, though, is that it would never actually be possible to have such a theory: we cannot know the complete initial configuration of all particles in the universe, so it’s not obvious how you would derive the correlations between particles beforehand. You would instead have to just assume they already “know” how to be correlated, which makes them equivalent to nonlocal hidden variable theories, and thus it is not entirely clear how they could be made Lorentz invariant. I’m not sure anyone has ever put forward a complete model in this framework either; nonlocal hidden variable theories have the same issue.