I had an email from a friend who asked some pertinent questions, which I thought I’d share.

Ease of use and accessibility, primarily.  Internally the system boils down to binding an address with a UUID and a public key.  Whilst coming up with a one-off registration for a username is a simple enough proposition, as all the system does is speak in UUIDs, what do you do with that key?  How do you “claim” that identity on multiple devices?  You rapidly get down to using a password or a password variant.  I’ve had my mobile phone number for 15 years and it’s survived many devices and computers.  The other thing is that OTR is about anonymity; this isn’t.  Many people would say that’s something they like, but it does make their life harder.  Again, same with user IDs: how do you find people?  You have to either ship your name around or bind it to an email address to be found.  None of these are insurmountable problems, and one of the things I’ve been thinking about is how you bind something like a phone number to, say, an email address, if we’re less convinced about the security around email addresses.

  •  What networks will it connect to? I’m reluctant to connect to yet one more network. I already have Jabber, Skype, AIM, Google, Twitter and Facebook.

It’s a fair question.  At the moment, ours, as we need to create the tunnel, but notionally the channel messages (the inner encrypted bundle, which is just passed through and which we can’t read) could be sent over anything, so long as the endpoint knew how to deal with them.  That’s an interesting thought, actually; I wonder if we could provision the service via them.

  • Do I really have to do yet one more merge of all my contacts with some new service?

No, it’s automatically built from the device’s address book, another reason phones are nice.  If we supported a published directory of people, I don’t see why grabbing your Facebook list of contacts and cross-referencing it with our directory wouldn’t be straightforward enough.

  • Will it be compatible with OTR on the other major networks? If not, I don’t see the compelling driver. If yes, what distinguishes it from the others?

The demonstrable difference is the level of privacy it provides combined with the ease of use.  OTR is clever stuff, but it’s not the sort of thing that your friends, family and colleagues could set up (well, your friends might, being security geeks :)) (hmm, maybe this isn’t such a good tag line).  The other thing is the openness of the protocol: its ability to be examined by people like you.

  • Does it run on its own dedicated servers and such or just piggyback on some other network like AIM? What kind of reliability can I expect?

Some of it has to: directories, keystores, authentication components and SMS gateways.  But certainly it could be piggybacked on other networks.  Reliability-wise, the processes themselves are dirt simple.  They rely on Amazon (today; insert cloud provider here) to provide highly scalable infrastructure components.  We’re certainly not going to do that ourselves, but we are good at building systems that use highly scalable infrastructure.  That is something we know a lot about.

  • What PKI will it use? PGP web of trust and key servers? X.509? I’d like to reuse identities that I already have that are already widely published. I’m not that keen on having yet another identity in yet another PKI.

It’s a hybrid; check out here, here and here.  There are no X.509 certificates (although we use X.509 for keys, which is really just an ASN.1 representation of a public key) and we sign them with their associated UUID and address for simplicity’s sake.  As for identity, it’s not a new identity (it’s a mobile number), but yes, it does live in our PKI.


Being unpopular

In the last post I talked a bit about some of the attack scenarios that people might use to get at the system.  In essence what we’ve got is a PKI system, where we centrally assert that a user’s public key is authentic and that you can trust us.

But what if you couldn’t?  If I take a flight from London to Japan, I can stop over in any number of countries, some of which don’t have a particularly fantastic track record when it comes to respecting people’s rights, not just the right to privacy but things like the right not to be tortured.  Some people building similar protocols have spoken about how they’d rather go to prison than let the system be compromised.  Thing is, that’s the wrong sort of mindset to have.  If I land in an unfriendly country, it’s not if I have to collude in breaking the system, it’s when.  If someone wants to start chopping off digits I’ll do just about anything they want me to.  Even if you’re the sort of person who’ll die to protect people’s privacy, what about your sysadmins?  Do they have children?  What about your developers, or anyone else who works on the system?  Are they all SERE-trained ninjas?  Because I’m not.


No, the trick has got to be that the system detects attempts to break it, in such a way that an attack is too risky to attempt, so that the people running the system are no longer the weakest link in the chain.

So let’s go back to the system.  Alice and Bob use the system.  They’ve been through the authentication process and come up gleaming: sat in our directory are Alice1 and Bob1, their public keys.  As Alice and Bob know each other, they have each other’s public keys in their address books.  Now, I’m holidaying in Palawan, in the southern Philippines, and I’m abducted by Katy the Kidnapper and Tim the Torturer.  Tim has become interested in Alice and Bob and wants to tap their communications.  He’s got me tied to a chair with electrodes attached and I’m in full compliance mode.

Now Tim and Katy can’t ask me to decrypt their chats; I don’t have the keys.  I can block the chats, but they’ll likely work out something is wrong; either way I can’t read the communications.  But Tim is cunning (and you have to absolutely bank on the fact that the world is rammed with cunning people).  He proposes to generate new key pairs and replace Alice’s key in the directory with his own, AliceEvil1.  Tim then captures the negotiation from Bob, pretends to be Alice, completes the negotiation and forwards the message on to Alice.  Problem is, Bob signs the negotiation message, and as soon as Alice gets it she checks the signature.  As it fails verification (because Tim can’t pretend to be Bob too), she knows something is up and gives a passing thought to a programmer in a dungeon somewhere.  But Tim is a cunning fox, and realizes that if this is to work he has to pretend to be Bob to Alice and Alice to Bob.  He goes and replaces Bob’s public key too, with BobEvil1.

We now have AliceEvil1 and BobEvil1 in the directory.  We’ll also assume that neither Bob nor Alice thinks this change is strange (in the protocol we’ll warn them).  The next thing that happens is that when others refresh their address books, they’ll get told that Alice or Bob have changed their keys.  As this isn’t a hugely common event, they’ll be asked if they want to check the new keys with Alice and Bob directly (bypassing the system and sending them a text message or phoning them).  Better still, as many people share contacts, it’s entirely possible that Mike, friends with both Alice & Bob, will get pinged about both changes at the same time.  As will anyone else.  So publishing duff keys to everyone is risky, and Tim will likely get caught.  “But what”, I hear you say, “about the text messages?  If they’re willing to kidnap me they’re willing to kidnap someone who can manipulate text messages too.”  It’s a fair point.  It’s why the real-time nature of calling someone is useful (and one of the reasons phones are good for this application).  OK, but what about blocking calls?  Yes, but Tim would have to do that for all calls, because he doesn’t know which one is from a concerned friend, which gets riskier still.  Replace call with tweet, Facebook update, whatever: a significant amount of noise to warn someone that there is a duff key in the directory.

Back to Tim. Tim really, really, really wants to read these messages, so he’s come up with an alternative to publishing AliceEvil1 and BobEvil1 in the directory.  How about getting me (nipple clamps, car battery) to publish AliceEvil1 and BobEvil1 only to Bob and Alice in turn?  Clever bloke, this Tim.  Much reduced footprint, much less likely to get caught; he likes this idea.  So now he’s set up as the perfect man in the middle.  He can pretend to be Alice to Bob and vice versa, and nobody else knows.  He also controls all of the messages in and out of both Alice and Bob’s phones, and all the phone calls.  So somehow Alice needs to verify that Bob’s public key is Bob’s, and she can’t by definition ask Bob.  So she asks Mike whether Bob’s public key is legit.  If Mike gets the message, he checks his phone book and the directory to see what’s there and sends a message back to Alice.  But hang on, can’t Tim be a man in the middle here too?  Yes, but he’s now got to pretend to be Alice to Bob, Bob to Alice and Mike to Alice too.  This means publishing MikeEvil1 to Alice, and seeing as Alice has already just had Bob change his public key, she’s totally suspicious now.  Better yet, if we trend towards the oldest keys in your address book, offset by the most active users on the system, we’ll be using a well-established public key (which would need replacing) and getting a prompt response (’cause Mike’s a chat fiend).  If we then throw in a random selection from that subset, Tim can’t even know who he’s supposed to impersonate ahead of time (to make the key substitution beforehand, which also means he needs to act as the go-between for all of Alice & Mike’s messages too).
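That selection heuristic can be sketched roughly as follows.  This is a toy illustration: the field names, thresholds and data are assumptions for the sake of the example, not Talaria’s actual schema.

```python
# Sketch of picking a verifier: favour long-established keys and active
# chatters, then choose at random so Tim can't predict who Alice will ask.
import random

contacts = [
    # (name, key_age_days, messages_last_30_days) -- illustrative data
    ("Mike",  900, 340),
    ("Carol", 700, 120),
    ("Dave",   30, 500),   # brand-new key: a poor verifier, easily substituted
    ("Erin",  800,   2),   # old key, but rarely chats: slow to respond
]

def candidate_verifiers(book, min_key_age=180, min_activity=50):
    """Keep contacts whose keys are well established AND who chat often."""
    return [c for c in book if c[1] >= min_key_age and c[2] >= min_activity]

def pick_verifier(book):
    pool = candidate_verifiers(book)
    # The random choice is the point: an attacker can't pre-substitute the
    # verifier's key without knowing ahead of time who will be asked.
    return random.choice(pool)[0]

print(pick_verifier(contacts))   # prints "Mike" or "Carol"
```

Because the pool is drawn from old keys held by active users, a substitution has to reach further back in time and cover more live traffic, which is exactly the risk the paragraph above describes.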

Lastly, Tim controls all the messages in and out.  Alice knows she’s asked Mike; Tim can block the response, but if Alice gets no reply she will get suspicious (we should tell her to be suspicious and to try phoning Mike and Bob).  Better yet, if Alice picks someone she also chats to, she’d potentially notice that all their messages had stopped, but this is probably minor.

ARGH!  Tim, being cunning, doesn’t go kidnapping people lightly and, having worked all this out, decides that Katy would be better off just kidnapping Alice.

This is what I mean about being unpopular.  The system is set up to be, for 99% of users, entirely trustworthy, but in the case where it might not be, to be so risky to interfere with that it’s not a viable way to get at people’s messages without being discovered.

…or Katy or Tim

When Mars Attacks!


When designing any system like Talaria, it’s very important to think about the types of ways that it might be attacked.  As we’ve explained in broad strokes here and here, Talaria uses a number of different things in concert to prevent those attacks.

The first scenario to think about is someone who doesn’t already use Talaria at all.  As part of the protocol we send an encrypted SMS message to a user to make sure that they have control of the phone number they say they do.  The first attack is that this message gets blocked and never arrives.  The thing to note about this attack is that whilst it prevents someone using the system, it doesn’t mean that your chat can be read by an unscrupulous actor.  At the very least this can be detected, and you know there is something wrong.

The next scenario is someone who has access to the telephone infrastructure pretending to be you.  This is a difficult problem: if you’ve never registered to use the system, in the first instance they can pretend to be you.  The attack is limited to someone who can manipulate the telephone infrastructure, and detection relies upon the person the attacker is talking to (whilst pretending to be you) realizing that they are not talking to their real friend.  This is a risky strategy and is likely to be detected when you talk to your friend.  One of the things the Talaria application does is that, whilst this negotiation is going on, it will ask if you want to call the person you’re chatting to, to make sure that they are in fact using Talaria.  “Hey Jim, says here you’ve just started using Talaria, is that true?”

The next scenario is where someone can intercept the text message and return the verifier.  This is of no use to them, as the response has to be signed and sent using the same secret key that only resides on your phone.  We reject messages which aren’t signed by you, so someone can’t complete the verification process on your behalf.

The other scenario is that Katy the Kidnapper gets us to change our software so that everything sent between Alice & Bob is sent to her too, or so that we leak the secret key for the channel.  To combat that, we give away the tools so that you, using your keys, can decrypt all of your own messages and see what’s being sent.  A change like that would be quickly found out, and that’s very risky.

But what if someone compelled us to help them?  They’ve kidnapped our family and we have no choice but to help.  The first attack of this kind we can think of is where they get our private key from us.  That means they can now impersonate us, capturing all your traffic and sending it to their system.  They can forge the public keys of your friends and publish new ones.  Thing is, your friends will get warned about this when they’re given new keys, and can choose to accept them or call you to ask what’s happened.  But what happens if they publish fake keys for both you and your friend?  Let’s say Alice wants to talk to Bob, and they both use the system to publish their public keys so they can sort out channels.  Except when they look each other up they get a fake key from Harry the Hacker & Katy the Kidnapper.  They both negotiate a secret channel with Harry, and Harry just switches out the communications on both ends and forwards them on.  This is a tricky problem.  You might be wondering why we don’t just publish the man-in-the-middle attacker’s fake key to everyone; the risk with that is that the more people involved in the process, the more likely it is that they’ll get found out.  Either way, we’ve got a man-in-the-middle attack here.  What they can’t do in this scenario is change the key that you have on your phone.  So the way to deal with this is for you to somehow find out your friend’s real key, the one on their phone.  They generated that, and for whatever reason the attackers can’t risk hacking into the phones.  We’ll discuss the distributed key check protocol in the next post, but in short, you can enable the app to ask others you chat with to check the keys of the people you talk to, automatically and on demand.
That way Harry the Hacker & Katy the Kidnapper have to start impersonating everyone in your network and acting as a go-between for all the communications in and out of your phone, and because they need to be a man in the middle for all of your channels, it increases the risk that they’ll get caught doing it.  This is what the system does automatically.  If you have an NFC device, simply touch your device to that of a friend using Talaria and they’ll check all your keys for you, to make sure nobody is impersonating them to you.
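The cross-check at the heart of that protocol is simple to sketch.  Here the directory and Mike’s address book are modelled as plain dictionaries, which is an assumption for illustration, not how Talaria stores them:

```python
# Toy model of the distributed key check: Mike compares the key the
# directory currently publishes for a contact against the copy in his own
# address book, captured when the keys were first exchanged.

def cross_check(address_book: dict, directory: dict, name: str) -> str:
    """Return 'ok', 'unknown', or 'mismatch' for a contact's key."""
    local = address_book.get(name)
    published = directory.get(name)
    if local is None or published is None:
        return "unknown"
    return "ok" if local == published else "mismatch"

mikes_address_book = {"Bob": "Bob1", "Alice": "Alice1"}
directory = {"Bob": "BobEvil1", "Alice": "Alice1"}   # Tim swapped Bob's key

print(cross_check(mikes_address_book, directory, "Bob"))    # mismatch
print(cross_check(mikes_address_book, directory, "Alice"))  # ok
```

A “mismatch” result is exactly the signal that gets Mike to warn Alice out of band.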

These are just some of the attacks we’ve thought through (there are a lot of ways in, and a lot of very intelligent and capable people out there who’d find them).  We’ll be talking a bit more about how we guard against these things in future posts.

Keeping it private – part 2


So in the last post we described the setup of the app and how it starts off on its journey to build a private channel.  To quickly recap: the phone now has a public/private key pair and a secret key that you’ve exchanged with the system, and the system has your public key and knows it’s from you.  That key gets stored in our directory, ready to give to anybody else who asks.

The next step is to build a private, encrypted channel between you and your friend.  The first part is to get their public key, so the app asks us what the registered public key is for your friend[1].  We give that to you using the private channel between you and the system, and we sign the response, so you know the public key comes from us, not to mention it coming back over our shared secret channel[2].

You then generate a channel name (a friendly name, usually the person’s name), a channel id and a new secret key, and the app bundles this up in much the same way as it did when it started to talk to the server.  The bundle of data, including your public key, is encrypted using your newly generated secret key, and then that secret key gets encrypted with your friend’s public key.  You also sign the entire packet.  The phone then sends that to us.  We do a few things to double-check that this message is from you: firstly, it has to come over our shared secret channel, which we know is from you, and secondly, the digital signature on the channel message (which we’ll forward on) has to be yours[3].  If that’s OK, we send it on to your friend.

When your friend gets your message, the first thing they do is get your public key from the server.  They then check the signature on the negotiation message to make sure that it really has come from you.  If it hasn’t, they throw it away.  They then decrypt the first part of the message using their private key.  In this part is the secret key that you are going to use to chat with each other.  They then send an acknowledgement message to you accepting the channel, encrypting it with the secret key and signing it with their private key[4].
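The flow above can be sketched end to end.  This is very much a toy model, not the real implementation: Talaria’s keys are X.509-wrapped, whereas here textbook RSA with tiny primes stands in for public-key encryption and signing (and, for brevity, a single keypair plays both Alice’s signing key and Bob’s decryption key), and a throwaway XOR keystream stands in for the real symmetric cipher.

```python
# Toy sketch of the channel negotiation: encrypt the bundle with a fresh
# channel secret, seal that secret with the friend's public key, sign it all.
import hashlib
import json
import secrets

# Textbook RSA keypair (p=61, q=53). Never use numbers this small for real.
N, E, D = 3233, 17, 2753

def rsa_encrypt(data: bytes) -> list:
    return [pow(b, E, N) for b in data]          # byte-at-a-time, toy only

def rsa_decrypt(blocks: list) -> bytes:
    return bytes(pow(c, D, N) for c in blocks)

def sign(data: bytes) -> int:
    digest = int.from_bytes(hashlib.sha256(data).digest(), "big") % N
    return pow(digest, D, N)                     # "sign" with the private exponent

def verify(data: bytes, sig: int) -> bool:
    digest = int.from_bytes(hashlib.sha256(data).digest(), "big") % N
    return pow(sig, E, N) == digest

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy symmetric cipher: SHA-256 keystream XOR (self-inverse).
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

# --- Alice builds the channel offer -------------------------------------
channel_secret = secrets.token_bytes(16)          # new secret key for the channel
offer = json.dumps({"channel": "Bob", "id": "chan-1"}).encode()

inner = xor_stream(channel_secret, offer)         # bundle, under the channel key
wrapped_key = rsa_encrypt(channel_secret)         # channel key, sealed for Bob
signature = sign(inner)                           # Alice signs the packet

# --- Bob receives it -----------------------------------------------------
assert verify(inner, signature)                   # really from Alice, not Tim
recovered = rsa_decrypt(wrapped_key)              # Bob's private key unseals it
assert xor_stream(recovered, inner) == offer      # Bob can now read the offer
print("channel established")
```

The important property survives even in the toy: the server forwarding these packets never sees `channel_secret` in the clear, which is exactly the point made in the next paragraph.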

So what have you got now?  You have exchanged a secret key with your friend that we, the system, haven’t seen, in a way that makes you sure you’ve exchanged that key with who we say it is.  You can now chat away, with us forwarding your messages, knowing that nobody in the middle can read them or impersonate your friend.

We’ll start to explore how the app detects whether we’re messing with the public keys we publish in the next post, When Mars Attacks.

[1] This is to ensure that you don’t just accept, at face value, a different public key sent in a message asserting to be from a specific phone.

[2] Some might wonder what the point of signing the message is if it’s coming from the server over our shared secret channel, which we’ve already authenticated.  This is a practical security measure.  Secret keys must be available to the server processes to wrap and bundle our messages and anything we forward to you from others.  That means that if a front-line server is compromised, somebody could start to impersonate us on that server fairly easily.  Our private key, on the other hand, is kept well away from the front-line machines, in a process that isn’t on the same network as the servers terminating chat connections.  If we can afford it, we’ll use a hardware security module.

[3] This proves that someone hasn’t just managed to obtain your secret key (say, from us) and that you still have access to your private key, the other half of the pair whose public half we’ve authenticated and hold.

[4] Lastly, to close the loop on the protocol, the far end signs the acknowledgement.  Notionally this isn’t actually necessary: you’ve encrypted your secret key with their public key, so if they’ve decrypted it (and sent you a message back) you can assert they’ve got their private key.  That’s all well and good for single-user channels, but it isn’t so easy when you want to have multi-person chats with a group of friends.

Additionally: private key operations are expensive; they take lots of processing power, and that in turn eats battery life.  We use them for critical parts of the protocol, but not for everything all the time.

Keeping it private – part 1


In the first of this series explaining how Talaria keeps your chat private, I wanted to give a very quick overview of how it works.  It’s fairly technical in detail, but if you want to understand any of the building blocks in more depth, leave questions in the comments.  We’ll start with the overview and post the actual messages from the system once we’ve done the high-level stuff.

Talaria keeps your messages private by first creating a public/private key pair.  The public half of this you’re going to give to us, so we can give it to your friends when they want to start a private channel with you.  After you’ve created the key pair you’ll create another key, a secret key, and you’re going to share this key with us.  This is used to protect messages sent between you and us, the chat server.

You give us a bundle of information: your phone number, your public key and the secret key you generate.  Using our public key (which comes with the app when you download it) you bundle all of this up (number, public key, secret key, id) and send it to us.  We now have your public key, a secret key to talk to you with, and your phone number.  The next step is important.  We don’t just publish to the world that your public key corresponds to that phone number.  We generate two numbers, encrypt them using your secret key and then send them back to you via text message.  The application then decrypts this text message, takes out the two numbers, adds them together and sends the sum back to us over the internet.  We verify that response, and if we’re happy we record your public key as being linked to that phone number (a phone number that your friends already have in the address books on their phones).
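The challenge-response round trip is small enough to sketch.  The real message is encrypted with the shared secret key using a proper cipher; here a throwaway XOR keystream stands in so the round trip is runnable end to end, purely as an illustration.

```python
# Minimal sketch of the SMS challenge-response described above.
import hashlib
import json
import secrets

shared_secret = secrets.token_bytes(16)   # established during registration

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    # Throwaway XOR keystream, standing in for a real symmetric cipher.
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

toy_decrypt = toy_encrypt   # XOR is its own inverse

# Server side: two random numbers, encrypted, sent by SMS.
a, b = secrets.randbelow(10_000), secrets.randbelow(10_000)
sms = toy_encrypt(shared_secret, json.dumps([a, b]).encode())

# Phone side: decrypt the text, add the numbers, reply over the internet.
x, y = json.loads(toy_decrypt(shared_secret, sms))
reply = x + y

# Server side: only someone holding both the SMS and the shared secret can
# produce the right sum, which binds the phone number to the registration.
assert reply == a + b
print("number verified")
```

The two channels are the point of the design: the challenge arrives over SMS, the answer returns over the internet, so an attacker needs to control both plus the shared secret.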

So what’ve we got now?  Well, you’ve got a public/private key pair, we’ve got a shared secret key, and we have a strongly authenticated binding of your phone number to your public key.  Which is all well and good: now we can exchange messages that nobody else can read, and nobody can impersonate either of us.

A quick note about using SMS.  Some of you will be thinking that SMS is the sort of thing that can be manipulated by unscrupulous actors, and that someone with access to the telecoms infrastructure could impersonate you, receive the verifier and do this whole process pretending to be you.  This is true, the first time.  But we’ve got some faith in human nature here, and critically we have a number of ways to enable this to be detected, in that people quickly work out that they’re in fact not talking to the person they think they are.  We’ll discuss in some detail the possible scenarios and how we enable people to detect compromises in the system.  Once you’re set up on the system, of course, any change to the public key that’s been published is detected by both parties.  As part of the startup process the app asks the system what public key it is publishing for the app’s own number and checks that they are the same.  It’s also double-checked during the login phase, which we’ll describe later.  We also support ‘usernames’, but what happens if you lose your key?  If you don’t back it up somehow or otherwise protect it?  We’ve come up with a way to back up keys on paper, or to use Mifare cards for those with NFC readers on their phones, but that starts to get a bit user-unfriendly and we expect the majority of people won’t make good use of it.
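That startup self-check boils down to one comparison.  Here the directory is modelled as a dictionary and the number and keys are made-up illustrative values:

```python
# On startup the app asks the directory what key it publishes for this
# phone's own number and compares it to the key held locally.

def startup_key_check(directory: dict, my_number: str, my_public_key: str) -> bool:
    """True if the directory publishes the key we expect for ourselves."""
    return directory.get(my_number) == my_public_key

directory = {"+447700900123": "pubkey-A"}   # illustrative number and key
print(startup_key_check(directory, "+447700900123", "pubkey-A"))     # True
print(startup_key_check(directory, "+447700900123", "pubkey-EVIL"))  # False
```

A `False` here means someone is publishing a different key in your name, which is the cue to warn the user.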

Still no private chat channel though?  We’ll cover that in the next post.

What’s it going to look like?

Hey there,

So, as some of you have been asking what it’s going to look like, we thought we’d share some screenshots to give an idea of how it’s going to work.  It’s going to be easy to use, and to work in the way that applications are built to work on that device.  Once you download it, you’ll immediately know how to use it.  Expect the Windows Phone 8 version to be very different from this one, but still guaranteeing the same level of privacy and security.

You can also get an idea of some of the features it has: multi-user messaging, blocking and address book integration, as well as giving those who are interested control of the inner workings.  If someone isn’t using Talaria, we’ll let you either invite them or give them a discounted version as a gift.

Active channels

This shows your active chat channels, who’s online (if they want to share that) and notifications of pending messages.

Simple chat screen that you already recognise

This is the simple chat screen.  We’re still deciding what it’s going to look like; we’d like to give people more information about the chat, the chat parties, sending files and so on.  We’re also working on a one-touch scheme to leave a voice message for someone.  Sometimes we find that tone gets lost in simple text messages and you just want to drop someone a quick few words in your own voice.  Ever been having a really heated argument and said something you regret?  We’ve come up with a “Time-out” button that stops you (and them) from exchanging messages for 5 minutes.  There are lots of things about Talaria that are going to make you want to use it.

Address Book

Your phone’s address book allows you to easily find people you want to chat to and get them on board with Talaria.

App Settings

A key feature of Talaria is keeping you in control.  We expect that many people will just use it to chat and not be too bothered about these settings.  One of the things that we do like is the idea that messages roll off.  We don’t store messages on the server once we’ve sent them to you, and from a privacy perspective it’s better that after a period of time the messages get deleted.  We find that when we want to refer to previous messages, it’s for timings, telephone numbers, places to meet, addresses and so on, which is why, as well as auto roll-off of messages, we have an encrypted message clipboard.  Just double-tap a message in chat and it’s automatically added to a clipboard we keep for you.  That way things of importance are always easily to hand, and you don’t need to go scrolling through hundreds of messages just to find that one email address or telephone number.  Of course, if you want to just keep everything, you can turn the feature off.

Also in here you’ll find things about the encryption keys.  Again, we expect most people won’t be fussed about this, but it’s all there for you to check, rotate and purge keys.  This is also the jumping-off point for security folks who want to make sure we’re doing what we should be.


If you’re reading this then you’ve arrived at the home of Talaria, the secure private mobile messenger app.  

We’re in the process of putting together some more material here, but in the meantime have a think about this: modern smartphones allow applications to read your SMS messages.  Ever received a verification code via text message to your phone for your online banking?  Ever stopped to think whether there was an application on your phone reading that text?