Saturday, April 11, 2015

Retrieving Google account owner and contacts via OAuth2

All the inviter solutions for web apps that I have found on the internet only provide the ability to import contacts. But how will the person receiving the invitation know the sender's identity? Of course, if the contact-import solution is an integrated module of an application A, then you can use the user info within that application to identify the sender. However, this requires the sender to have an account within application A, which reduces the range of people who can send non-anonymous invitations to use the app. So I thought about a modification that fixes this concern by allowing the public to send such invites. The idea is to retrieve, via the OAuth2 protocol, the full name and email address of the Gmail/Google user whose contacts are imported, which removes the need to create an account within app A in order to send a non-anonymous invite to use app A. The code is a modified version of 25Labs' sample source.

But first, here is an illustration to quickly show how OAuth2 works.
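To make the first step of that flow concrete, here is a minimal sketch of how the authorization request could be built. It assumes the Google OAuth2 endpoint and Contacts scope as they stood at the time of writing; `CLIENT_ID` and `REDIRECT_URI` are placeholders you would obtain from the Google Developers Console.

```python
from urllib.parse import urlencode

# Placeholders -- substitute your own values from the Developers Console.
CLIENT_ID = "your-client-id.apps.googleusercontent.com"
REDIRECT_URI = "https://example.com/oauth2callback"

def build_auth_url():
    """Build the URL the user's browser is sent to: they log in to
    Google, grant access, and Google redirects back with a code that
    the app later exchanges for an access token."""
    params = {
        "response_type": "code",   # ask for an authorization code
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        # read-only access to the user's contacts feed
        "scope": "https://www.google.com/m8/feeds/",
    }
    return "https://accounts.google.com/o/oauth2/auth?" + urlencode(params)
```

Once the app receives the authorization code at `REDIRECT_URI`, it exchanges it server-side for an access token and uses that token to fetch the contacts feed.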



Knowing the format of the Google API response, I simply store the response in a temporary XML file that I later parse to extract the name and email of the user who is sending the invite.
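The parsing step can be sketched as follows. This assumes an Atom/GData-style response in which the feed's `<author>` element carries the account owner's name and email; the element names here are an assumption about the response format, not a guarantee of it.

```python
import xml.etree.ElementTree as ET

# Atom namespace used by GData-style feeds (assumed response format).
ATOM = "{http://www.w3.org/2005/Atom}"

def extract_owner(xml_path):
    """Parse the stored response file and return the account owner's
    (name, email), read from the feed's <author> element."""
    root = ET.parse(xml_path).getroot()
    author = root.find(ATOM + "author")
    name = author.findtext(ATOM + "name")
    email = author.findtext(ATOM + "email")
    return name, email
```

These two values are then injected into the invitation so the recipient sees exactly who is inviting them.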
To reduce the probability of a file-name conflict when multiple simultaneous imports store their temporary XML files, I add a unique prefix that I call a "salt", defined as the MD5 hash of a unique string: the concatenation of the current Unix timestamp and the user's IP address.
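The salt construction described above can be sketched like this (the IP address shown is a documentation placeholder):

```python
import hashlib
import time

def make_salt(ip_address):
    """Derive a unique filename prefix: the MD5 hash of the current
    Unix timestamp concatenated with the requesting user's IP."""
    unique = str(int(time.time())) + ip_address
    return hashlib.md5(unique.encode("utf-8")).hexdigest()

# The salt prefixes the temporary file, e.g. "<salt>_contacts.xml"
salt = make_salt("203.0.113.7")
temp_name = salt + "_contacts.xml"
```

Two imports from the same IP in the same second would still collide, but for this use case the timestamp-plus-IP combination is unique enough in practice.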

A slight modification for a big change. You can get the full code here. I expect to do the same for the other email services that provide an API.
Hope this helps some developers and engineers out there.

References:
- 25Labs.com
- Developers.google.com

Friday, January 16, 2015

Beware of security questions

Almost all of the major companies (insurance, health, banks, hospitals, ...) that hold prime confidential information about us may end up helping people steal that information instead of protecting it.
I guess most of you are familiar with their security questions. So here is a small story. Yesterday I was registering for an online service and came to the stage where I had to provide answers to the security questions. Looking at them, I thought, "man, how come you feed me the same question set as my bank?". As an information security practitioner, I felt the need to second-guess this security feature and thus launched a quick investigation.
To do this, I created test accounts at two different types of companies, X and Y. I won't name them to avoid any reputation impact. Here are the security questions they provided:


With a simple visual observation, one can see that X's questions are about 30% similar to Y's. Therefore, if an attacker knows the answers that protect your data at company X, the probability that s/he can also access your data at company Y is roughly 0.3. In other words, if the questions you chose for X and Y fall in the colored similarity zone of the picture above, then you are in a big mess.
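The back-of-the-envelope estimate can be sketched with two hypothetical question sets; these stand in for the real lists from X and Y, which are withheld above, so the questions themselves are invented for illustration only.

```python
# Hypothetical question sets -- stand-ins for the withheld lists of X and Y.
x_questions = {
    "What is your mother's maiden name?",
    "What was the name of your first pet?",
    "In what city were you born?",
    "What is your favorite food?",
    "What was your high school mascot?",
    "What is the name of your best friend?",
    "What was your first car?",
    "What street did you grow up on?",
    "What is your father's middle name?",
    "What was your first job?",
}
y_questions = {
    "What is your mother's maiden name?",      # shared with X
    "What was the name of your first pet?",    # shared with X
    "In what city were you born?",             # shared with X
    "Who was your favorite teacher?",
    "What was the first concert you attended?",
    "What was your childhood nickname?",
    "What is your grandmother's first name?",
    "What was your first phone number?",
    "In what city did you honeymoon?",
    "What is your favorite sports team?",
}

def overlap_ratio(a, b):
    """Fraction of a's questions that also appear in b: an upper bound
    on the chance that an attacker who knows your answers at one
    company can reuse them at the other."""
    return len(a & b) / len(a)
```

With 3 of 10 questions shared, `overlap_ratio` comes out at 0.3, matching the 30% similarity observed above.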
We are witnessing here a trade-off between "user responsibility" and "security hardness". Unfortunately, this compromise will most likely shift in favor of user responsibility, because most companies don't spend many resources checking other companies' security questions before setting up their own; they usually take the most common security questions and tweak them a bit. And we all know that when it comes to computer systems, users like straight and easy requests. So, folks, think twice when choosing your security questions. If picking similar questions for all your accounts is a risk you are ready to live with, then go for it; otherwise, diversity in questions is highly recommended.
Another danger that came to my mind concerns the answers themselves. One big problem in security is adding more security layers while keeping a good user experience. In the design of security questions, it looks like companies opt for user experience by asking simple questions. And of course, the price to pay is lower security: anyone who knows you well enough, or who uses social engineering, can guess the answer. And because most of us users trust computer systems (especially the big fish), we usually give the true answer to the security question instead of fooling a possible attacker's inference power by providing a fake one. Lying to a human being is a bad thing, but lying to a computer system is not. A word to the wise is enough.