Passwords are bad. We've known passwords are bad for decades, but the truth is that, flaws and all, they're unlikely to go away for a very long time.
It's our own fault. Humans are pretty bad at remembering random information, so we make a point of choosing passwords that are memorable to us. "Memorable", generally, means "predictable". Any review of the passwords chosen across a reasonably-sized user base reveals the same things over and over again:
- Users choose their passwords according to predictable patterns based on their environment, their interests, and their own lives.
- Users share passwords between different accounts because it's easier to remember them that way.
- Users are predictable in the way they change their passwords when they rotate them (for example, by incrementing a number at the end).
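To illustrate how exploitable that last pattern is, here's a minimal sketch of what an attacker might do with one leaked credential. The rules and function names here are hypothetical, illustrative examples, not a real cracking ruleset:

```python
import re

def likely_rotations(leaked: str) -> list[str]:
    """Guess the passwords a user is likely to rotate to next,
    given one leaked credential (illustrative rules only)."""
    candidates = []
    match = re.search(r"^(.*?)(\d+)$", leaked)
    if match:
        stem, num = match.groups()
        # "Summer23" -> "Summer24", "Summer25", ... style increments
        for step in (1, 2, 3):
            candidates.append(f"{stem}{int(num) + step}")
    else:
        # No trailing number yet: users often just append one
        candidates.extend(f"{leaked}{n}" for n in (1, 2, 123))
    # Common suffix addition when forced to add a symbol
    candidates.append(leaked + "!")
    return candidates

print(likely_rotations("Summer23"))
# prints ['Summer24', 'Summer25', 'Summer26', 'Summer23!']
```

Real-world tools apply thousands of such mangling rules, which is why "change your password regularly" on its own offers so little protection once one version has leaked.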
We've spent decades trying to tell the world how to handle their passwords securely:
- Choose long passwords based on phrases rather than single words
- Use password management tools to ensure that you have unique credentials for every account
- Make sure you don't share credentials between standard accounts and privileged accounts
- Make sure you use a lot of complexity in your passwords
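The "long passphrase" advice above can be made concrete with a little arithmetic. Assuming each word or character is chosen uniformly at random (a big assumption; human-chosen passwords have far less entropy), the strength in bits is length × log2(pool size):

```python
import math

def entropy_bits(pool_size: int, length: int) -> float:
    # Entropy of a uniformly random selection: length * log2(pool)
    return length * math.log2(pool_size)

# 8 characters drawn from ~95 printable ASCII characters
complex_pw = entropy_bits(95, 8)       # ~52.6 bits
# 4 words drawn uniformly from a 7776-word diceware list
passphrase4 = entropy_bits(7776, 4)    # ~51.7 bits
# Adding a fifth word comfortably beats the 8-char "complex" password
passphrase5 = entropy_bits(7776, 5)    # ~64.6 bits
```

The point of the comparison: a memorable five-word passphrase can carry more entropy than a hard-to-remember 8-character jumble, which is why the "phrases rather than single words" advice exists.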
Still, the use of stolen credentials is one of the most common ways adversaries gain access to systems to cause a data breach - which suggests something isn't quite working.
Passwordless authentication is generally orders of magnitude better than using passwords. Compromising a resource protected exclusively by passwordless mechanisms is much more complicated, because the adversary typically needs to compromise a physical device in the control of the target user. There are still problems with most implementations, though, and these tend to crop up when considering fallback positions and edge cases. One of the key features of passwords, and the reason they will likely never disappear entirely, is their universal applicability.
In pretty much every application you can imagine, it's reasonable (and easy) to ask a user to provide a piece of secret information that they know in order to verify their identity (even if that information is only 'secret' within a certain degree of certainty). You don't need to manage any devices, you don't need to provide any support beyond a way to reset access, and the process is so familiar that people no longer need training in it. The same isn't necessarily true of other mechanisms for authenticating users in a passwordless world. Authentication factors are typically one of three things: something you know (like a password), something you have (like a physical device such as a token or a smartphone), or something you are (biometrics - like your fingerprint, your face or your DNA).
Many implementations of passwordless schemes rely on a user having an enrolled device within an environment, with authentication either tied directly into the device (e.g. device trust models for access to corporate resources), or with a user verifying when accessing resources that they are in possession of that device (for example, by accepting a notification on their smartphone). Some implementations use hardware tokens which must be presented, or mechanisms whereby a user needs to demonstrate proximity to a sensor to authenticate.
These can be a great way to authenticate people - especially in a corporate environment - but there are still costs associated with them. Hardware tokens are physical devices that need to be managed and that users need to keep track of. What happens when a user forgets their token, or loses it? What happens if it's stolen? When passwordless schemes are implemented, there are often fallback authentication mechanisms so that users can regain access to resources if they don't have their enrolled device, or their token goes astray. In many cases, these fallback mechanisms circle right back round to secret information which the user knows - for example, answering secret questions in order to re-enrol access. An adversary may not be able to spray passwords against these fallback mechanisms in quite the same way, but they share many of the same weaknesses as any other 'password' solution (predictability, use of shared information, etc.), and those weaknesses need to be considered at implementation time.
We're used to two-factor authentication via smartphone authenticator applications these days, but implementing this type of 2FA in a corporate environment requires that a business has a way to manage the devices on which those applications are installed. You may need to purchase and manage the mobile phones yourself, or install MAM software on your employees' devices - which they might not be happy about. For public-access website accounts aimed at consumers, inclusion becomes a problem, and an accessible fallback is a necessity. Not every user has a smartphone, and even among those who do, not everyone is willing or able to navigate authenticator applications in order to access online services. Fallbacks such as 'SMS to a mobile' are popular - but you need a mobile phone and phone signal to use them. Falling back to password authentication, and letting people opt into 2FA for their accounts if they want it, tends to allow everybody to participate, so it remains a popular implementation choice.
Biometrics (something you are, like your face or your fingerprint) are another common means of verifying identity, but tolerance and accessibility can be problematic. To strike a balance between security and usability, biometric mechanisms allow a margin for error. Because, for example, a finger might not always be in precisely the same position, algorithms have to decide whether a fingerprint is "close enough" to the enrolled pattern - and this leaves room for false positives. False negatives - where a scan of the correct finger is rejected - are frustrating for the user, and the less tolerance there is for false negatives within a target user group, the more likely it is that false positives will be accepted. If you plan to implement biometrics, you also need to plan for exceptions to your scheme for accessibility reasons. Not all users will meet the criteria to enrol in your biometric authentication programs, so you need to account for other authentication mechanisms where, for example, an individual might not have fingerprints suitable for enrolment, or might not be able to complete an iris scan.
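The tension between false positives and false negatives comes down to a single match threshold. A minimal sketch, using entirely made-up similarity scores, shows how moving that threshold trades one kind of error for the other:

```python
# Made-up similarity scores for illustration only
genuine_scores  = [0.91, 0.88, 0.95, 0.73, 0.86]   # correct finger, same user
impostor_scores = [0.41, 0.55, 0.62, 0.77, 0.35]   # wrong finger / other user

def rates(threshold: float) -> tuple[float, float]:
    """Return (false accept rate, false reject rate) at a given threshold."""
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

# A lenient threshold eliminates frustrating false rejects...
print(rates(0.70))  # prints (0.2, 0.0) - but one impostor now gets in
# ...while a stricter threshold keeps impostors out but rejects a real user
print(rates(0.80))  # prints (0.0, 0.2)
```

There is no threshold in this toy data that gives zero on both error rates at once, which is exactly the tolerance problem described above: tightening the system against impostors makes it reject legitimate users more often.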
If you are running a public-access solution (like an e-commerce site) and want your customers to be able to access it from anywhere, at any time, without worrying about device compatibility - passwords are an attractive option. Not a secure option, but certainly one that enables easy access for users and which users are accustomed to.
Providing an alternative to passwords which works universally is difficult, and there's a trade-off between absolute security and the practicalities of ensuring that everybody can access the resources they need when they need them. Passwords remain, regrettably, an easy answer to complex questions of universal accessibility and inclusion, and this means they will likely be with us in some form or another for a very long time, along with all of their inherent weaknesses. So we'll continue to talk about password hygiene, how to choose a good password and how to manage passwords securely for much longer than we want to - because sometimes it's the least-worst answer we can come up with.