This is not peer reviewed research.
Trust is a constant concern in the tech world (SSL certificates, firewalls, authentication/authorization/accounting, blockchain, etc.). The problem is that the approaches adopted don’t reach the public until it’s too late, for two reasons:
Every once in a while some service comes out that strikes a good balance and brings forth a paradigm shift. Let’s Encrypt did that for SSL, zero trust did it for internal systems communication, and so on. However, there’s always a lag in the adoption of security measures, and it only takes one malicious actor adopting new technology to blow a hole wide open in “tried and true” security and trust measures.
The alternative would be a non-standard diaper app that, rather than hiding the incoming call, would pick it up and drop it. I don’t know if such software exists.
I assume you meant dialer app 😆 . But anyway, for some Android phones you can use call screening.
Both are concerning, but as a former academic, to me neither of them is as insidious as the harm that LLMs are already doing to training data. A lot of corpora depend on collecting public online data to construct data sets for research, and the assumption is that the data is largely human-generated. This balance is about to shift, and it’s going to cause significant damage to future research. Even if everyone agreed to make a change right now, the well is already poisoned. We’re talking about the equivalent of the burning of the Library of Alexandria for linguistics research.
These days I would recommend CrowdSec over fail2ban.