More concretely, I’m asking this: why aren’t applications compiled fully to native code before distribution, rather than to bytecode that runs on some virtual machine or runtime environment?
Implementation details aside, an Android application fundamentally consists of bytecode, static resources, and so on. In the Java world, I understand that the main appeal of the JVM is enhanced portability and perhaps also improved security. I know Android uses ART instead, but it remains the case that applications are shipped as processor-independent bytecode, which drives a lot of complex machinery to turn them into runnable code efficiently: ART optimizing profiles, JIT compilation, the hybrid JIT/AOT compilation pipeline… that’s a lot of work to support this design.
Shipping Android devices today are essentially all arm64, so why the extra complexity? Is this a vestigial remnant of the past? If so, with minimum supported versions steadily moving up, I would think Android could transition to a binary distribution model at a natural point where compatibility is already breaking. What benefit is being realized from all this runtime complexity?
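For concreteness, here is a minimal sketch of the one place where an app already ships processor-dependent code: an NDK library loaded over JNI (the library and function names below are made up). The Kotlin/DEX side is identical on every device, while the .so has to be built and packaged once per ABI; a fully native distribution model would extend that per-ABI handling to the entire app.

```kotlin
// Illustrative sketch (hypothetical names): the Kotlin/DEX side of an app is
// ABI-independent, while any NDK code must be shipped per ABI, e.g. under
// jniLibs/arm64-v8a/, jniLibs/armeabi-v7a/, jniLibs/x86_64/.
class NativeFilter {
    companion object {
        init {
            // Loads libimagefilter.so for whatever ABI this device runs;
            // the surrounding bytecode is the same on every device.
            System.loadLibrary("imagefilter")
        }
    }

    // Implemented in C/C++ and compiled once per target ABI.
    external fun sharpen(pixels: IntArray, width: Int, height: Int)
}
```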
Thank you for this great and detailed answer!
I would also add that JVM environments today support more languages, such as Scala, Kotlin, and Clojure (to name a few), so more variety and more modern paradigms are available.
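As a small, made-up illustration of what those more modern paradigms look like in practice: sealed hierarchies, null safety, and expression-oriented when in Kotlin, all of which still compile to ordinary DEX bytecode and run on ART like any Java code would.

```kotlin
// Hypothetical example: modern language features (sealed hierarchies,
// null safety, expression-oriented when) that still compile to ordinary
// DEX bytecode and run on ART.
sealed class LoadResult {
    data class Success(val payload: String) : LoadResult()
    data class Failure(val error: Throwable) : LoadResult()
    object Loading : LoadResult()
}

fun describe(result: LoadResult): String = when (result) {
    is LoadResult.Success -> "Got ${result.payload.length} characters"
    is LoadResult.Failure -> "Failed: ${result.error.message ?: "unknown"}"
    LoadResult.Loading -> "Still loading"
}
```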
As for native languages, we are more or less left with C, C++, Go, and Rust. While some of them are really awesome, none seems like a good choice for general-purpose app development.
And a counter-intuitive point is that modern runtimes are so well optimized that they can sometimes outperform native applications (I’m not talking about very tight computations such as image processing or AI), because the JIT has much more information to work with: it knows the specific hardware and can use runtime introspection, neither of which is available at compile time.
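To make that concrete with a hypothetical sketch (all names invented): at the virtual call site below, a JIT that has only ever observed one Shape implementation can speculatively devirtualize and inline area(), deoptimizing if another implementation shows up later, whereas an ahead-of-time compiler building this code in isolation has to keep the indirect call.

```kotlin
// Hypothetical example: a polymorphic call site that a JIT can optimize
// using information only available at run time.
interface Shape {
    fun area(): Double
}

class Circle(private val radius: Double) : Shape {
    override fun area() = Math.PI * radius * radius
}

// Hot loop: s.area() is a virtual (interface) call.
// An AOT compiler must assume any Shape implementation may show up here.
// A JIT that has only ever seen Circle at this call site can speculate,
// inline Circle.area() directly, and fall back (deoptimize) if another
// implementation appears later.
fun totalArea(shapes: List<Shape>): Double {
    var sum = 0.0
    for (s in shapes) {
        sum += s.area()
    }
    return sum
}

fun main() {
    val circles: List<Shape> = List(1_000_000) { Circle(it * 0.001) }
    println(totalArea(circles))
}
```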