Designing security systems like the Google Android

Google’s popular mobile platform, Android, offers valuable insight into how to build a network that is safe from hackers. Android programs are Java applications that run in a “sandbox” with no access to other applications, the hardware, or the OS except through tightly controlled interfaces. To reach out of the sandbox through an interface to a sensitive resource, a program must be granted permission by the user.

This built-in security design was necessary because Google runs a program for independent software vendors (ISVs) that aims to create a library of thousands of applications for Android. Google therefore needed a way to prevent one ISV’s application from disrupting others running simultaneously.

Security Architecture

A central design point of the Android security architecture is that no application, by default, has permission to perform any operations that would adversely impact other applications, the operating system, or the user. This includes reading or writing the user’s private data (such as contacts or e-mails), reading or writing another application’s files, performing network access, keeping the device awake, etc.
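The default-deny model described above can be sketched in plain Java. This is an illustrative simulation, not the real Android API: the class and method names are hypothetical, and only the `android.permission.*` strings are real permission names.

```java
import java.util.Set;

// Hypothetical sketch of Android's default-deny permission model:
// an application holds only the permissions it declared, and every
// other sensitive operation is denied.
public class PermissionSandbox {
    private final Set<String> granted;

    public PermissionSandbox(Set<String> declaredPermissions) {
        this.granted = declaredPermissions;
    }

    // Deny by default: an operation is allowed only if the matching
    // permission was declared by the application.
    public boolean checkPermission(String permission) {
        return granted.contains(permission);
    }

    public static void main(String[] args) {
        PermissionSandbox app =
            new PermissionSandbox(Set.of("android.permission.INTERNET"));
        System.out.println(app.checkPermission("android.permission.INTERNET"));
        System.out.println(app.checkPermission("android.permission.READ_CONTACTS"));
    }
}
```

The key design choice is that there is no way to ask for more at run time in this model: the set of granted permissions is fixed when the object (the installed application) is created.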

An application’s process runs in a security sandbox. The sandbox is designed to prevent applications from disrupting each other; to gain capabilities beyond those the basic sandbox provides, an application must explicitly declare the permissions it needs. The system handles these permission requests in various ways, typically by automatically allowing or disallowing them based on certificates, or by prompting the user. The permissions an application requires are declared statically in the application itself, so they are known up-front at install time and do not change afterward.
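In practice, this static declaration is a line in the application’s AndroidManifest.xml. For example, an application that performs network access would declare the (real) INTERNET permission; the package name here is a made-up placeholder:

```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.app">
    <!-- Declared statically: known at install time, fixed thereafter -->
    <uses-permission android:name="android.permission.INTERNET" />
</manifest>
```

Because the declaration lives in the package itself, the installer can show the full list of requested permissions before the user agrees to install.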

Application Signing

All Android applications (.apk files) must be signed with a certificate whose private key is held by their developer. This certificate identifies the author of the application. The certificate does not need to be signed by a certificate authority: it is perfectly allowable, and typical, for Android applications to use self-signed certificates. The certificate is used only to establish trust relationships between applications, not for wholesale control over whether an application can be installed. The most significant ways that signatures impact security are by determining who can access signature-based permissions and who can share user IDs.
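Conceptually, a signature-based trust decision comes down to whether two packages were signed with the exact same certificate. The sketch below is an assumption-laden illustration in plain Java, not the real Android implementation: it compares certificates byte-for-byte and computes a SHA-256 fingerprint of the kind commonly used to identify a signing certificate.

```java
import java.security.MessageDigest;
import java.util.Arrays;

// Illustrative sketch: signature-based permissions and shared user IDs
// hinge on whether two packages carry the identical signing certificate.
public class SignatureCheck {
    // Two packages "match" only if their signing certificates are
    // byte-for-byte identical.
    public static boolean sameSigner(byte[] certA, byte[] certB) {
        return Arrays.equals(certA, certB);
    }

    // A SHA-256 fingerprint, a common way to name a certificate.
    public static String fingerprint(byte[] cert) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(cert);
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // Placeholder bytes standing in for encoded certificates.
        byte[] devCert = "self-signed-cert-bytes".getBytes();
        byte[] otherCert = "different-cert-bytes".getBytes();
        System.out.println(sameSigner(devCert, devCert));
        System.out.println(sameSigner(devCert, otherCert));
        System.out.println(fingerprint(devCert));
    }
}
```

Note that this check says nothing about *who* the developer is, only that two packages came from the *same* key holder, which is exactly why self-signed certificates are sufficient.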

User IDs and File Access

Each Android package (.apk) file installed on the device is given its own unique Linux user ID, creating a sandbox for it and preventing it from touching other applications (or other applications from touching it). This user ID is assigned to it when the application is installed on the device, and remains constant for the duration of its life on that device. Because security enforcement happens at the process level, the code of any two packages cannot normally run in the same process, since they need to run as different Linux users.
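The per-package UID scheme can be sketched as a simple allocator. This is a hypothetical simulation of the behavior described above, not Android’s actual installer code; the starting value 10000 matches the first application UID on real Android devices, but the class and package names are invented.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of per-package Linux user-ID assignment: each
// package gets a distinct UID at install time, and the same package
// always maps back to the same UID for its life on the device.
public class UidAllocator {
    private static final int FIRST_APPLICATION_UID = 10000; // Android convention
    private final Map<String, Integer> assigned = new HashMap<>();
    private int next = FIRST_APPLICATION_UID;

    // Allocates a UID on first install; later lookups return the
    // same value, so the sandbox boundary never moves.
    public int uidFor(String packageName) {
        return assigned.computeIfAbsent(packageName, p -> next++);
    }

    public static void main(String[] args) {
        UidAllocator allocator = new UidAllocator();
        int alpha = allocator.uidFor("com.example.alpha");
        int beta = allocator.uidFor("com.example.beta");
        System.out.println(alpha != beta);                               // distinct sandboxes
        System.out.println(alpha == allocator.uidFor("com.example.alpha")); // stable mapping
    }
}
```

Because standard Linux file-permission enforcement keys off the UID, giving each package its own user is what turns an ordinary kernel mechanism into an application sandbox.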


Looking across the evolution of the technology industry, we have seen the move from mainframe computers to desktop computers, to networked computers, to netbooks, and now to palm-sized mobile computing devices. In this era of mobility, users see no IT people, network-usage guidelines, or instruction manuals. They simply expect their applications to work, and to be secure, just like their kitchen fridge, which rarely disappoints the hungry soul. The secure computing platforms of the future, therefore, may look a lot like Android.
