64-Bit Android* OS

What does 64-Bit mean?

It means that the processor's integer registers and pointers are 64 bits wide.

The three main advantages of a 64-bit operating system are:

  1. Increased number of registers
  2. Extended address space
  3. Support for larger amounts of RAM

What do Android Dalvik developers need to do?

It's not hard to imagine Android phones with 64-bit chips in the not-too-distant future. Since the Android kernel is based on a Linux* kernel, and Linux has supported 64-bit technology for years, the only thing Android needs to fully support 64-bit processing is a 64-bit-compatible Dalvik VM. A Dalvik application written purely in Java will work without any changes on a 64-bit device because the bytecode is platform independent. However, Dalvik application developers can take better advantage of 64-bit chips by recompiling their applications to target 64-bit processors.

What do native Android application developers need to do?

Native application developers can take full advantage of the capabilities offered by the underlying processor. For example, Intel® Advanced Vector Extensions (Intel® AVX) widens the SIMD registers to 256 bits on 64-bit processors. Native application developers can do the following to target 64-bit processors:

  1. Recompile their applications for a 64-bit target so the compiler can use the wider registers and additional instructions.
  2. Re-engineer code to take advantage of other platform capabilities that are enabled on 64-bit chips.

By default, all 32-bit applications run without a glitch on 64-bit processors, but they might run slower than applications tuned to run on 64-bit processors.

Why should developers care about 64-Bit OSs?

The most important reason is that the 64-bit transition is already happening. The 64-bit 4th generation Intel® Atom™ processor (code-named Bay Trail) is already on the market, and companies like Qualcomm and Samsung have announced 64-bit capabilities of their own. These changes in the processor landscape will eventually push 64-bit support onto operating system roadmaps.

At a general level, there are not many significant differences between 64-bit and 32-bit processors. But compute-intensive applications (see below for the software workloads that run faster on 64-bit processors) can see significant improvements when moved from 32-bit to 64-bit. In almost all cases, 64-bit applications run faster in a 64-bit environment than 32-bit applications do in that same environment, which is reason enough for developers to care. Exploiting platform capabilities can further speed up applications that perform a large number of computations.

Why does the increased size of the CPU registers make the software faster?

Memory is extremely slow compared to the CPU, and reading from and writing to memory take a long time compared to how long it takes the CPU to process an instruction. CPUs try to hide this with layers of caches, but even the fastest layer of cache is slow compared to internal CPU registers. More registers means more data can be kept purely CPU-internal, reducing memory accesses and increasing performance.

Just how much difference this makes will depend on the specific code in question, as well as how good the compiler is at optimizing it to make the best use of available registers. When the Intel® architecture moved from 32-bit to 64-bit, the number of registers doubled from 8 to 16, and this made for a substantial performance improvement.

64-bit pointers allow applications to address larger RAM address spaces

On a 32-bit processor, only 4 GB of memory is addressable, and of that a program typically gets between 1 and 3 GB because the operating system reserves the rest. A single program cannot use more memory than that unless it resorts to techniques such as splitting itself into multiple processes, which takes considerable programming effort. On a 64-bit operating system this is not a concern, because the addressable memory space is far larger than any RAM configuration in use today.

Memory-mapped files are becoming more difficult to use on 32-bit architectures because files larger than 4 GB are increasingly common. Such files cannot be memory-mapped in one piece on a 32-bit architecture; only part of the file can be mapped into the address space at a time, and the mapped parts must be swapped in and out as the file is accessed. This is a problem because memory mapping, when properly implemented by the OS, is one of the most efficient ways to get data from disk into memory.

64-bit pointers also come with a substantial downside: most programs use more memory, because every stored pointer consumes twice as much space. An identical program therefore has a larger memory footprint on a 64-bit CPU than on a 32-bit CPU. Since pointers are very common in programs, this inflates the working set, puts more pressure on the caches, and can hurt performance.

What types of software workloads run faster due to increased CPU registers?

Register count strongly influences application performance. RAM is slow compared to on-CPU registers, and while CPU caches narrow the gap, even a cache access is slower than reading a register.

How much performance increases depends on how well the compiler can optimize for a 64-bit environment. Compute-intensive applications that do the majority of their processing within a small amount of memory will see significant gains, because a large fraction of their working data can be kept in the CPU registers.

Contrast this with an unoptimized application, which might see performance decrease because 64-bit pointers consume twice the memory bandwidth. In a mobile environment, however, the operating system and installed applications should be engineered to avoid this.

Are 32-bit applications compatible with 64-bit CPUs and operating systems?

Both ARM* and Intel 64-bit CPUs have a 32-bit compatibility mode. While 32-bit applications will run on 64-bit processors, compiling with a 64-bit optimizing compiler allows them to take advantage of the architectural benefits of a 64-bit environment.

Intel's cutting-edge CPU enhancements will be on 64-bit processors

Intel continually introduces new, cutting-edge features into its 64-bit processors. Applications wanting to take advantage of Intel AVX, Intel® Advanced Encryption Standard New Instructions (Intel® AES-NI), and other innovations need to compile in 64-bit mode.

Intel is actively encouraging developers to write applications that optimize for Intel architecture. Intel software developers are also working to optimize operating systems for 64-bit architectures and to create SDKs that expose CPU functionality to higher level developers.
