Recently ISE published an article detailing security vulnerabilities in most password managers: specifically, the ability of an attacker with access to a computer's memory to read sensitive information contained in the password vault while the password manager is running. This sparked a heated debate over at 1Password. I commented there twice, but I think most people are not looking at the bird's-eye view here. In the end I believe it all boils down to the following:
From a security perspective, the use of a computer is comparable to living in a house.
Assume our house has a high fence around it with locked gates. The fence is similar to a firewall. It prevents most people from gaining casual entry to the house. It is layer 1 in the security sandwich.
A house has locks on all the main entrances. These locks prevent a casual adversary from gaining entrance after they have scaled the fence; they do not stop a professional. This is layer 2 of our security model.
One usually has a safe in the house too. Let us assume we have a good TL-30 rated safe: it will take a professional with the right equipment and skills at least 30 minutes to break into it. So we store our innermost secrets in the safe, which constitutes layer 3 of our security model.
For someone to get to our innermost secrets, they need to scale the fence, pick the lock of the house and then crack the safe. Each layer acts as a deterrent, either discouraging an attacker or slowing her down. Each one on its own is weak, but combined they are much stronger. Only a determined adversary with fence-climbing, lock-picking and safe-cracking skills will get in. That is a much smaller fraction of the population than could have gained access if we had no fence, no locks on our doors and no safe, with our innermost secrets stored on the kitchen countertop.
So - defense in depth is a good thing.
Our computer is usually protected by a firewall - the fence. The computer itself needs a password to access the operating system - this is the house lock. Note that there is no difference in security (nor should there be) between a computer booted from a powered-off state to the login screen, and a computer that was logged in but is now locked (at least, that is my current understanding). Only the password will grant you access.
Once unlocked, our innermost secrets are stored in a password manager: we have too many passwords to remember, so we store them in a database. The password manager encrypts this database, akin to the safe's strong metal body and lock. In our computing scenario, however, it is (currently) impossible to pick the lock or break the walls of the safe - the encryption is very strong. So we have defense in depth: an adversary has to breach the firewall, somehow gain access to the locked computer and then crack the password database. A tall order indeed.
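To make the "thick safe walls" concrete, here is a minimal sketch (my own illustrative code, not 1Password's actual implementation - the function name and parameters are assumptions) of how a vault key is typically derived from a master password. A deliberately slow key-derivation function is what makes brute-forcing a stolen, encrypted database infeasible:

```python
import hashlib
import os

def derive_vault_key(master_password: str, salt: bytes, iterations: int = 600_000) -> bytes:
    """Derive a 256-bit vault key from the master password using PBKDF2.

    The high iteration count makes every guess expensive, so an attacker
    who steals the encrypted database cannot cheaply try billions of
    candidate passwords.
    """
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, iterations)

salt = os.urandom(16)  # stored alongside the encrypted database, not secret
key = derive_vault_key("correct horse battery staple", salt)
wrong = derive_vault_key("correct horse battery stapler", salt)
assert key != wrong    # a near-miss password yields a completely different key
```

Real password managers layer authenticated encryption on top of a derived key like this; the point here is only that, without the master password, the "safe" cannot be picked.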
Things are not that simple though, as this is where the analogy ends. In real life, you are most at risk of burglary when you are not at home: most burglars will wait until you are gone to avoid confrontation (if the intent is theft of your secrets), because picking a lock is not as hard as going face to face with an angry homeowner wielding a baseball bat. It is also much more difficult to physically disguise yourself in someone else's home than it is to disguise your digital footprint when remotely accessing someone's computer. We can spot a person in our house who does not belong - it is a trait we evolved over millennia. Computer communications, by contrast, are invisible: 1's and 0's flowing as electromagnetic waves through the air and the copper wires connecting us to each other. Most ordinary computer users are not skilled at detecting digital traces.
It is much harder to break the password (assuming strong passwords are used) of an operating system account or password database than it is to wait until the user has unlocked that door for you - by logging in to their PC and running some content the attacker sent over, such as a malicious web site link or software application. So social engineering while the user is at his PC is the go-to method - or at least a good one, usually better than trying to brute-force the computer remotely.
Thus the different threat detection and response models of the house analogy and the computer scenario dictate that you are more at risk sitting in front of your computer, logged in and doing work, than when the computer is turned off and you are away from it. An unpowered computer is unhackable without physical access to it.
This is crucial to understanding why software needs to be made robust against a different set of threats than the house analogy suggests. Defense in depth certainly carries forward, but the threat model is different. Software is pretty good at defending information when the computer is at rest, or powered down. An attacker with physical access can take the hard drive out, but encryption makes it almost impossible to recover anything useful. So we have that threat covered (mostly).
The biggest threats now are those aimed at us while we work on our computers, logged in to the operating system and running our various applications. This is akin to protecting our innermost secrets during a house party, with lots of people walking around. We trust most of those people (software applications), but since we cannot see into the depths of every soul, we cannot be sure they are all good. A bad actor can piggyback in as one of the guests and gain access to your house without you knowing. All that prevents the secrets in your safe from being stolen is the fact that the safe is very, very heavy (it cannot be picked up and carried away), and that it has a very good lock on it.
So we potentially have both good and bad software on our computers, and we cannot know for sure which is which. Software has the same rights as we do on our computers. So what prevents some bad software from accessing your password vault? This is where the discussion falls apart, I feel.
1Password (and even Troy Hunt) feel that if your computer is already compromised by malicious software, then it is game over and nothing 1Password can do will protect you. I am in the opposite camp: precisely because the biggest threat we face is malicious software running on our computers with the same privileges we have, we need to change our thinking about defense in depth. We are not going to stop social engineering from working, because humans are gullible. So malware will continue to work - antivirus cannot keep up with it.
Now technically speaking they are right. If malicious software is installed on your PC, it can listen to your keystrokes, read system memory and access unlocked files - basically, it can wait for something interesting to happen, such as you typing your master password to unlock your password database, and send it to the adversary. But to me that is like saying that since a bad actor can gain access to your house during a party, you might as well leave your safe open - he has the ability to access it anyway. The analogy breaks down here because it is all about the window of exposure. For the bad actor in your house, the window of exposure is small: the length of the party. If he is still there after the party, you will notice. So a good safe will help you. Malware, however, has a much greater window of exposure - it can potentially lie undetected for years. So securing your password database does not help as much: eventually you will enter your master password and the malware will get it. At least, this is what 1Password and other security experts are trying to say. And they are right.
What I feel we should be doing is similar to the house-and-safe analogy: we need to find a way to still limit access to certain things on a running computer with a malicious actor present. 1Password has an option to lock the vault after a certain timeout. This feature is horribly broken, though. The idea is to reduce the window of exposure - the time your vault sits unlocked, open to prying eyes - but it only works against visual inspection via the monitor, not against software that reads system memory. The researchers found that even after locking the vault, while 1Password is still running, your master password and all other secrets remain in system memory, unencrypted, available for malware to read.
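What locking should mean, at minimum, is sketched below (again my own toy code, not 1Password's): keep decrypted secrets in a mutable buffer and overwrite it in place when the vault locks, so a memory scan after the timeout finds only zeros. In garbage-collected languages this is best-effort - immutable strings and copies made by the runtime may still linger - which is part of why getting this fully right is hard:

```python
class Vault:
    """Toy vault that wipes its decrypted secrets from memory on lock."""

    def __init__(self, secret: bytes) -> None:
        # bytearray is mutable, so it can be overwritten in place later
        self._plaintext = bytearray(secret)
        self.locked = False

    def read_secret(self) -> bytes:
        if self.locked:
            raise PermissionError("vault is locked")
        return bytes(self._plaintext)

    def lock(self) -> None:
        # Overwrite the buffer before marking the vault locked, so the
        # plaintext does not linger in RAM waiting to be scanned.
        for i in range(len(self._plaintext)):
            self._plaintext[i] = 0
        self.locked = True

vault = Vault(b"hunter2")
assert vault.read_secret() == b"hunter2"
vault.lock()
assert bytes(vault._plaintext) == b"\x00" * 7   # wiped, not merely hidden
```

A lock that leaves the plaintext in memory, as the researchers observed, is the equivalent of a safe door that is closed but not latched.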
So what now? There is what is technically hard to do, and then there is what we should be doing. They keep presenting the fact that this is hard to do as if it meant it cannot be done. So how do we protect against this? Remember, no system is perfectly safe. We cannot (yet) build one. But we can make it hard enough that an attacker will be deterred.
Here are some thoughts:
Together this will mean that another process cannot get to the secrets decrypted in RAM. This will make 1Password and others much, much more resilient against attacks.
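Parts of this are already possible today. As a hedged, Linux-only sketch (the helper name is mine), a process can pin its secret pages into physical RAM with `mlock(2)` so they never hit the swap file, and wipe them before release. The stronger guarantee - that other same-user processes cannot read the pages at all - is the part that needs OS support:

```python
import ctypes
import ctypes.util
import mmap

# Load the C library to reach mlock(); Linux/glibc assumed.
libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

def locked_secret_buffer(size: int) -> mmap.mmap:
    """Allocate a page-aligned buffer and try to pin it into physical RAM.

    mlock() stops the kernel from swapping these pages to disk, so the
    secret cannot leak into the swap file. It does NOT stop another
    process of the same user from reading our memory - that part needs
    OS-level policy, which is the point of this article.
    """
    buf = mmap.mmap(-1, size)  # anonymous, page-aligned mapping
    addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
    if libc.mlock(ctypes.c_void_p(addr), ctypes.c_size_t(size)) != 0:
        # May fail under a low RLIMIT_MEMLOCK; the buffer still works,
        # it just is not pinned.
        pass
    return buf

buf = locked_secret_buffer(mmap.PAGESIZE)
buf[:7] = b"hunter2"                  # decrypted secret lives only here
buf[:len(buf)] = b"\x00" * len(buf)   # wipe before releasing
buf.close()
```

Combine pinning and wiping with an OS that denies cross-process memory reads by default, and the picture in the paragraph above starts to become achievable.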
Now I know both of these are probably OS-level issues; this article is not trying to put blame on anyone - it is merely there to highlight the actual problem, and a way to fix it.
If anyone can tell me why those two solutions are not technically possible, please feel free to email me. And do keep in mind I am well aware that hardware bugs like Spectre and Meltdown etc. can potentially break any OS protections - for now, let us focus on what we can address.
Update 2019-02-27: Obviously these are just starting recommendations. To apply a more comprehensive security strategy, the OS should also:
Other input devices need to be protected similarly. Applications that read the screen should also request permission before being allowed to do so.