Thanks for the reply.
I looked into Qubes a little bit, and you're right, it's very similar to the approach I had imagined. However, in their case, I still think the qubes themselves are "infectable"... maybe. It seems to me it's more of a sandbox approach: malware could still infect the sandbox, it just couldn't reach out to other systems. Same end result, different implementation. It appears to me they are only implementing current technology in a segregated way, whereas I'm proposing a completely new layer of security that doesn't exist yet.
As I have thought about this more and more, I have really just run myself in circles, bumping into the limits of my knowledge of programming and OS development. But I think I've added a bit more to the idea to make it more doable/reasonable.
Let me back up a bit.
Suppose you have your programs on an encrypted, read-only drive. When you run a program, the OS would allocate memory and run a "ghost" of that program, fully loaded into RAM and cordoned off from any changes or code injections by malware. The program would have a unique "trust key" that would allow other sub-programs to enter its RAM environment. If malware did get onto the system, it would never have that trust key, so it could not make changes to the ghost program... and it certainly could not make changes to the original program locked onto the read-only drive space. (You could use the same trust system for updating the software on that drive.)
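To make the trust-key idea concrete, here is a minimal sketch in Python. Everything here is hypothetical (the names, the key sizes, the idea of using an HMAC tag as the "trust key"); a real version would live inside the OS, not a script. The point is just that a sub-program must present a tag derived from a secret the malware never sees:

```python
import hashlib
import hmac
import os

# Hypothetical: the OS generates a fresh trust key when the ghost
# process is launched; only the OS and the ghost process hold it.
trust_key = os.urandom(32)

def issue_tag(sub_program_id: bytes) -> bytes:
    """OS signs an approved sub-program's identity with the trust key."""
    return hmac.new(trust_key, sub_program_id, hashlib.sha256).digest()

def verify_sub_program(sub_program_id: bytes, tag: bytes) -> bool:
    """The ghost process checks the tag before letting a sub-program
    into its RAM environment."""
    expected = hmac.new(trust_key, sub_program_id, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

# An approved sub-program presents a valid tag...
good_tag = issue_tag(b"feature_module.dll")
print(verify_sub_program(b"feature_module.dll", good_tag))   # True

# ...while malware without the trust key cannot forge one.
print(verify_sub_program(b"malware.dll", os.urandom(32)))    # False
```

This is just the "trust but verify" handshake in miniature; how the key is stored so malware can't read it out of RAM is exactly the hard part the rest of the post worries about.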
Take, for example, a CAD package I use often, SolidWorks. Solidworks.exe is more an "environment" that runs lots and lots of sub-programs than it is a single program for doing CAD work. As things stand now, if malware got onto your system, it could fairly easily inject code into one of those sub-programs, and when SolidWorks runs it, you have a critical security vulnerability. (Ignore how it got on the PC in the first place.)
By my method, solidworks.exe has two defenses. The primary one is the code-authentication step that verifies none of its programs and sub-programs have been modified as they sit on the read-only storage space; I assume this would have to happen with some sort of hash check or signature. Secondly, as the program runs and a user calls up a sub-program, SolidWorks itself (and this is important), NOT the OS, would verify the trust key of that sub-program before it is allowed to run in its environment. This would basically be an always-on "trust but verify" situation.
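The hash-check half of that could look something like the sketch below. The file names and contents are made up, and a real scheme would sign the manifest itself so it can't be swapped out, but the shape of the check is simple: hash everything on the drive at provisioning time, then refuse to launch anything that no longer matches.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical: file contents as they sit on the read-only drive.
read_only_drive = {
    "solidworks.exe": b"main environment code",
    "sketcher.dll": b"sub-program code",
}

# Manifest of known-good hashes, recorded when the drive was provisioned.
manifest = {name: sha256_of(data) for name, data in read_only_drive.items()}

def verify_before_launch(name: str, data: bytes) -> bool:
    """Refuse to run anything whose hash doesn't match the manifest."""
    return manifest.get(name) == sha256_of(data)

print(verify_before_launch("sketcher.dll", b"sub-program code"))       # True
# A tampered or injected copy fails the check:
print(verify_before_launch("sketcher.dll", b"injected malware code"))  # False
```

In practice this is close to what code-signing schemes like Authenticode already do for individual binaries; the new part of the idea is pairing it with the runtime trust-key check done by the program itself.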
So now you have four or five layers of defense with minimal overhead. (We have plenty of CPU and RAM to handle this on a modern workstation, imo.)
Now where I get a bit lost is how the OS would play into all this in a safe, controlled way, such that it doesn't leak the trust keys or somehow spoof an improper result. As an example, explorer.exe is a common attack point. If a trusted program calls up an explorer process, then that process should also get a trust key identifying where it came from and where it goes (and possibly an encryption layer). You would have to ensure that malware isn't sitting and waiting, trying to intercept one of those keys and thereby enabling its own explorer.exe process. However, if the malware can never get installed onto the known-good read-only storage media, then it would never pass a hash check and would be nullified, so it could never steal a trust key in the first place. I might add that if malware DID somehow manage to intercept a key and started doing its own thing, the calling program would not get the information it needed and would therefore submit a request for the exact same process again. If this happens repeatedly, the antivirus should be able to flag it immediately.
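That last detection idea, flagging a program that keeps re-issuing the same request because something intercepted the result, is easy to sketch. The threshold and names below are invented for illustration:

```python
from collections import Counter

# Hypothetical sketch of the repeated-request flag: if a program keeps
# re-issuing the same request because an interceptor swallowed the
# result, the count crosses a threshold and the antivirus is alerted.
RETRY_THRESHOLD = 3  # made-up number; a real system would tune this
request_counts = Counter()

def record_request(process_name: str) -> bool:
    """Return True when the request pattern looks suspicious."""
    request_counts[process_name] += 1
    return request_counts[process_name] > RETRY_THRESHOLD

results = [record_request("explorer.exe") for _ in range(5)]
print(results)  # [False, False, False, True, True]
```

A real implementation would also need to decay the counts over time so that legitimate retries (a flaky disk, say) don't trip the flag, but the basic signal is just "same request, too many times, no progress."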
I know, it's complicated. There are multiple layers and angles from which to attack malware, and I'm probably not doing that good a job of explaining what's in my head.