How Does Auth work?

Daniel Earwicker, Chief Software Architect, FISCAL Technologies Ltd
2018-11-24 · SAAS, FEDERATION

Abstract: Authentication is figuring out who someone is, and authorization is concerned with what they are allowed to do (or any other useful information about them). The basic approach is straightforward, but it becomes more useful and interesting when you consider many separate services that all need to collectively accept requests from the same users.

The more gnarly aspects of this are solved problems - or at least, are part of an on-going arms race against the Bad Guys - so it would be foolish to reimplement something so critically important. You really should reuse an existing implementation. But the best way to be sure you are using something correctly is to get the gist of how it works.

The password problem

We're all familiar with the process of being authenticated: whenever you enter your username and password you are being authenticated. There are other fancy ways, like looking at your phone's camera, but they all boil down to the same thing: supplying some data that only you can supply. We can call it your password, whether or not it's something you type in.

This works well as long as you are the only person who knows your password, so your ability to produce it on demand proves that you are you. But you don't want to be sharing it willy-nilly; the more you do that, the more chances there are for someone to steal it. Nor would it be convenient to remember a different password for every service.

Things get even worse when you consider that services are often layered, by making use of other services. Your private medical data are stored in one service, and you want to allow another service to access those data for processing in some way. It seems you have to give your password to this processing service, but then there is nothing to limit its ability to act on your behalf - you'll have given away complete control over your identity.

Authentication as a service

What you need is a single authentication service, A, which is the only thing you have to trust, and therefore the only thing to which you ever give your password. When you try to access some other service, S, and it wants to know who you are, it redirects you to A (providing the address of S so that A knows where to redirect back to later).

A asks you for your username and password. As a security-savvy user, you can check your address bar and see that you're talking to trustworthy A, so you go ahead happily. A checks that your credentials are legit, and redirects back to S, also passing some token, which is a chunk of information that S can use to find out who you are.
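To make that concrete, here's roughly what the redirect from S to A might look like. This is only a sketch: the endpoint path (/login) and the returnUrl parameter are invented for illustration, and real providers define their own.

```typescript
// A sketch only: the /login path and returnUrl parameter are invented,
// not the API of any particular provider.
const AUTH_BASE_URL = "https://auth.example.com";      // the trusted service A
const THIS_PAGE     = "https://s.example.com/reports"; // where A should send the user back

// When S sees an unauthenticated request, it answers with a redirect to A:
const loginUrl =
  `${AUTH_BASE_URL}/login?returnUrl=${encodeURIComponent(THIS_PAGE)}`;

console.log(loginUrl);
// https://auth.example.com/login?returnUrl=https%3A%2F%2Fs.example.com%2Freports
```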

Implementing A (briefly: don't)

We'll come back to the token in a moment. First it's worth reiterating the importance of not implementing your own version of A. You already have enough security crap to worry about without becoming responsible for telling all your users when hackers have stolen their passwords. In short, it is not okay for A to store a table of usernames and corresponding plaintext passwords. The password must be cryptographically hashed, and the resulting gibberish stored instead.

But even so-called secure hashes can be brute-force cracked. To slow this process down, rather than just hashing the password, it should first be combined with some extra random data called salt, which must be unique to every user (and regenerated anew whenever they change their password). This stops the use of pre-computed tables of hash results. Another technique is to deliberately use a really slow (computationally intensive) hashing algorithm - that is, slow enough to make it impractical to use brute force cracking, but not so slow that you can't authenticate a user with acceptable latency.
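If you're curious what that looks like in practice, here's a minimal sketch using Node's built-in scrypt, which is deliberately expensive to compute, together with a fresh random salt per user. (Remember, this is just to illustrate the idea - we're not implementing A.)

```typescript
import { randomBytes, scryptSync, timingSafeEqual } from "crypto";

// Hash a new password with a fresh, per-user salt. scrypt is deliberately
// CPU- and memory-hungry, which is exactly what we want here.
function hashPassword(password: string) {
  const salt = randomBytes(16);
  const hash = scryptSync(password, salt, 64);
  return { salt: salt.toString("hex"), hash: hash.toString("hex") };
}

// Verify by re-hashing with the stored salt and comparing in constant time.
function verifyPassword(password: string, salt: string, hash: string) {
  const candidate = scryptSync(password, Buffer.from(salt, "hex"), 64);
  return timingSafeEqual(candidate, Buffer.from(hash, "hex"));
}
```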

The crackers keep getting faster hardware, so an A service has to keep getting better at slowing them down. It's a full time job, so it makes sense for the rest of us to stand on the shoulders of giants.

So what's a token?

We noted earlier that from the perspective of a service S, the role of A is to be a black box to which a user can be redirected, so that a little while later they will return to S with something called a token. This somehow lets S know who the user is. All subsequent requests sent from the client (say, the user's browser) will include the token as the only thing that indicates who is trying to use S.

The token is proof that a certain user has recently provided their password to A because they want to use S. And so it has a time limit on its validity. When that expires, another token must be obtained.

Ideally it would not be possible to forge a token. Otherwise a hostile client could act on behalf of a user without them needing to provide their password. But of course given that valid tokens are just patterns of information, they can be constructed by anyone. The best we can do is ensure that it is terrifically unlikely that a valid token could be constructed without the user's password.

Given a token, S now needs a way to validate the token, to check that it is genuine. It also needs a way to get things like a username, or an email address, or even an answer to a question such as "So are they allowed to delete everything?" from a given token. These facts are known as claims.
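Just to pin the idea down, the claims might deserialize into something like this. The property names are hypothetical; every A defines its own set.

```typescript
// Hypothetical claim names - every A defines its own set.
interface Claims {
  subject: string;              // a stable identifier for the user
  name: string;
  email: string;
  canDeleteEverything: boolean; // "So are they allowed to delete everything?"
  expires: Date;                // when this runs out, a new token must be obtained
}

// S's authorization check then becomes a simple lookup on the claims:
function mayDeleteEverything(claims: Claims): boolean {
  return claims.canDeleteEverything && claims.expires > new Date();
}
```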

Scopes

Note that the line between authentication and authorization is being blurred here. It's not as fundamental a distinction as it might seem - more a matter of interpretation. We could adopt a silly rule such as:

"Anyone whose first name begins with a vowel is allowed to launch missiles"

and so merely by knowing the user's first name we have enough information to authorize them. Or a service could use the user's email address as the key into its own database of authorization rules.

But that would only solve the problem of authorizing a user to do things with one service. Now consider a layered situation, like the example of your medical data managed in one service, and another service that you want to allow to access some of your medical data. A total of four parties are now involved:

  • you, the user who owns the data
  • S, the service that stores your medical data
  • T, the other service that wants to read some of that data on your behalf
  • A, the authentication service trusted by all of the above

This is where it becomes helpful for the service A to become both an authentication and (by design, not merely potential interpretation) an authorization service. Rather than authorizing the user to do something, the user is authorizing T to access S on the user's behalf.

When T redirects to A, it can also specify a set of required scopes previously defined by S, which might more helpfully be described as capabilities. In the example of your medical data, the possible scopes might include the ability to read the history of your heart rate, or your blood pressure, or your current medication, and T may only be interested in the heart rate data. Before generating a token, A will ask the user "Do you want to allow T to read your heart rate?" If the answer is yes it would generate a token that enabled that capability only. T could then use that token when contacting the medical data service S. And S would actually check the scopes enabled in the token before allowing certain operations to proceed.
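To make the mechanics concrete, the redirect from T to A might carry the requested scopes like this. The parameter names are loosely modelled on OAuth-style authorize requests, and the client_id and heart_rate:read scope are made up for this example.

```typescript
// Parameter names loosely modelled on OAuth-style authorize requests;
// the client_id and scope values are invented for this example.
const authorizeUrl =
  "https://auth.example.com/authorize" +
  "?client_id=heart-rate-dashboard" +                                 // identifies T to A
  "&redirect_uri=" + encodeURIComponent("https://t.example.com/cb") + // where A sends the user back
  "&scope=" + encodeURIComponent("heart_rate:read");                  // the one capability T needs

// A then asks the user: "Do you want to allow heart-rate-dashboard to read
// your heart rate?" and, if they agree, issues a token limited to that scope.
```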

So if you're just looking for a solution to the problem of authorizing users directly accessing your service, you could be forgiven for wondering what the point of scopes is. Supposing you are the author of S, the repository that owns the data, and you've created a UI for accessing it so the user can manage their data directly. One wrinkle of using a scope-based authorization service is that the first time your user accesses your service via your UI, they will be asked:

Do you want to allow Acme UI the following permissions?

  • Read blood pressure
  • Read current medication
  • Change current medication

It will seem quite jarring for your users to be asked if they want to allow this to happen - of course they do. It's a baffling question to a user who has just started trying to use a service directly. As far as they are concerned, your UI and your service are one and the same thing, and it makes no sense as a user to be asked if you trust one to access the other.

But think about how your service's data might be useful to another service. This is when scopes become a relevant concept: they are a way to give your users control over how third party services can use your service on their behalf.

Note that from the perspective of T, both A and S are black boxes, and the contents or structure of tokens is entirely irrelevant. T wants to access S on behalf of the user, and A is simply a magic genie that generates a token that makes this possible.

But let's dig a little deeper into the ultimate case where at last there's only one service involved, and it actually cares about what the token signifies: is it valid, and what claims (facts about the user) does it yield?

Opaque tokens

The first kind of A I remember integrating with was CAS 1.0, a popular Java solution. It used the word ticket instead of token, but these seem to be roughly interchangeable terms: a value that represents something. Has anyone used baton?

The ticket was opaque, meaning that it was impossible to parse it. It was essentially just a long string of random data. So a service S would have to go back to A directly to request the actual useful information associated with the ticket.

The obvious downside of this is the extra cost of a round-trip from S to A for every request handled by S. Consider that there may be a large number of services S1, S2, ... Sn all hitting A every time they are used, and it becomes clear that A will be a potential bottleneck. So maybe every S ends up maintaining its own cache of information obtained from the ticket.

I said potential bottleneck, because it is worth noting that a service like A could be made so highly available that it adds no significant burden to all the Sn. For token validation purposes, A is essentially a hash table in which each key is a token mapped to a set of facts about a user. It could be implemented by a Redis cluster with certain commands disabled (any, such as KEYS *, that allow keys to be listed). When a user is authenticated, a random string is generated to be their token and stored as a key in Redis, with the value being (say) a JSON object of useful facts about the user. The key would be set to expire automatically. And any S would be able to validate a token and learn about the user in a single step by simply trying to GET its value from Redis. This would effectively be a centralized form of the same kind of caching that each S would otherwise have to implement.
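If you fancy seeing that speculation as code, here's a minimal sketch using the ioredis client. Everything here - the key prefix, TTL and claim shape - is invented for illustration.

```typescript
import Redis from "ioredis";
import { randomBytes } from "crypto";

const redis = new Redis(); // connects to localhost:6379 by default

// When A authenticates a user: mint a random opaque token and store the
// user's claims against it, with an automatic expiry.
async function issueToken(claims: object, ttlSeconds = 3600) {
  const token = randomBytes(32).toString("hex");
  await redis.set(`token:${token}`, JSON.stringify(claims), "EX", ttlSeconds);
  return token;
}

// When any S receives a token: one GET both validates it and yields the claims.
async function validateToken(token: string) {
  const json = await redis.get(`token:${token}`);
  return json ? JSON.parse(json) : null; // null => invalid or expired
}
```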

But this is speculation - we're not going to be implementing A, right?

Transparent tokens

There is an alternative to opaque tokens, hinted at by the name. Any sort of additional round-trip to a central cache incurs some cost, however much we are able to minimize it, so it would be good to eliminate it altogether.

A transparent token is directly parseable: it contains the very information the service needs, such as the user's name and their authorized capabilities.

But this introduces another problem: if the token is a bundle of information, and is sent by the client, then what's to stop a hostile client from generating a token itself that says "Yup, this guy is totally allowed to do everything"?

As usual there is no such thing as a free lunch, and here we pay with a different kind of complexity. A transparent token must contain two parts: the payload of useful information and a signature.

Digital signatures

If you're familiar with asymmetric encryption by RSA, where a sender can encrypt a message with the recipient's public key and so only the recipient can decrypt it with their private key, you almost have a way to implement signatures.

First, the RSA function is actually symmetrical, in the sense that you can reverse the roles of the private and public keys and it still has the same secure capabilities. So if A were to hash the payload (to make it small enough to fit in a single block) and then pass it to the RSA function with A's private key, the result would be a signature. The token can contain the payload and the signature. A recipient, S, can validate the token by calling the RSA function with the signature and A's public key, to get back the hashed payload. It then only has to hash the payload itself and check that it is the same as the decrypted version.

So to be able to validate tokens, S just needs A's public key, and it can do the whole thing without needing to contact any central service every time.
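Here's a small sketch of that hash-and-sign round trip using Node's built-in crypto module. Generating the key pair inline just keeps the example self-contained; in reality A keeps the private key secret and publishes only the public key.

```typescript
import { generateKeyPairSync, createSign, createVerify } from "crypto";

// In reality A keeps privateKey secret and publishes publicKey;
// generating a pair inline just makes this sketch self-contained.
const { publicKey, privateKey } = generateKeyPairSync("rsa", { modulusLength: 2048 });

const payload = JSON.stringify({ sub: "alice", scope: "heart_rate:read" });

// A: hash the payload and sign the hash with its private key.
const signature = createSign("sha256").update(payload).sign(privateKey, "base64");

// S: re-hash the payload and check the signature with A's public key -
// no round-trip to A required.
const valid = createVerify("sha256").update(payload).verify(publicKey, signature, "base64");
console.log(valid); // true, unless the payload or signature was tampered with
```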

OAuth 2.0 and JWT

There is sufficient complexity in this idea that any home-made concoction is likely to be riddled with bugs that could entirely undermine its security. So once again, while it might be fun to build your own, it would be madness to rely on it.

One popular kind of transparent token is a JWT (JSON Web Token), which consists of three base64url-encoded strings separated by . characters: a header describing the signing algorithm, the payload, and the signature.

In OAuth 2.0, HTTPS requests include a header:

Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c

It is of course vitally important that such requests use HTTPS, not plaintext HTTP, because anyone with a valid token can act as the user who generated it.
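On the receiving end, S might check the bearer token with the widely used jsonwebtoken package, along these lines. Whether the verification key is A's RSA public key or a shared HMAC secret depends entirely on how A signs its tokens (the example token above happens to be HMAC-signed); the environment variable name here is just a placeholder.

```typescript
import jwt from "jsonwebtoken";

// A's public key (for RSA-signed tokens) or shared secret (for HMAC-signed
// ones) - which of the two applies depends on the issuer.
const verificationKey = process.env.AUTH_PUBLIC_KEY!;

function claimsFromRequest(authorizationHeader: string) {
  const [scheme, token] = authorizationHeader.split(" ");
  if (scheme !== "Bearer") throw new Error("Expected a bearer token");

  // Throws if the signature is wrong or the token has expired;
  // otherwise returns the decoded payload (the claims).
  return jwt.verify(token, verificationKey);
}
```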

Example: GitHub

Take a look at the GitHub documentation for accessing their API. GitHub is the ultimate repository S, defining scopes such as repo, notifications and gist. You are going to write a third-party service T, so you can access GitHub on behalf of a user already known to GitHub.

(You'll notice that authorization requires two steps: the redirect to A, and then an HTTPS call from your service to an endpoint in A that lets you exchange a temporary code for a genuine access token. Some providers support implicit grants, which avoid this extra call, but GitHub doesn't.)
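Putting it together, the two steps against GitHub look roughly like this. The client_id and client_secret placeholders come from registering your OAuth app, the callback URL is invented, and the sketch assumes a runtime with a global fetch (Node 18+).

```typescript
// Step 1: T redirects the user's browser to GitHub (the A in this story).
const authorizeUrl =
  "https://github.com/login/oauth/authorize" +
  "?client_id=YOUR_CLIENT_ID" +          // from registering your OAuth app
  "&scope=repo%20gist" +                 // space-delimited list of scopes
  "&redirect_uri=" + encodeURIComponent("https://t.example.com/callback");

// Step 2: GitHub redirects back to T with ?code=..., which T exchanges
// server-to-server for the real access token.
async function exchangeCode(code: string) {
  const response = await fetch("https://github.com/login/oauth/access_token", {
    method: "POST",
    headers: { Accept: "application/json" }, // ask GitHub to reply with JSON
    body: new URLSearchParams({
      client_id: "YOUR_CLIENT_ID",
      client_secret: "YOUR_CLIENT_SECRET", // never ships to the browser
      code,
    }),
  });
  const { access_token } = await response.json();
  return access_token; // then send as: Authorization: Bearer <access_token>
}
```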
