This guide is intended to help developers port existing ARM*-based NDK applications to x86. If you already have a working application and need to know how to quickly get your application recognized on the Android* Market by x86 devices, this document should provide you with the information to get started. The guide also provides tips and guidance if you run into compiler issues during the porting process.
- NDK Overview
- Porting Overview
- Porting Tips ARM* to x86
The NDK is a great tool for combining the power of native x86 code with the graphical interface of a wrapper Android* application. While the tool can be used to reap performance benefits in some applications, some caution needs to be taken as this isn't always the case.
The purpose of the NDK is to aid developers as follows:
- Compiling a native C/C++ library to be used (called by wrapper Java* code) in an Android* package
- Recompiling ARM* native libraries to x86 (Intel® Atom™ microarchitecture), with porting as needed
The second point may require nothing more than a build flag change and a recompile, but sometimes it isn't that easy. For example, if the native library involves inline assembly within C code, that code can't simply be assembled and work "as is" on the two different architectures; some rewriting will be needed (see the section discussing ARM* NEON* versus Intel SSE).
The interface bridging the Android* Java* code with native code precompiled by the NDK is known as the Java* Native Interface (JNI). More information can be found here: http://java.sun.com/docs/books/jni/.
The aforementioned link is an extensive deep dive into the JNI specification. For a quicker overview, the Wikipedia page suffices (when in doubt, always cross-reference the specification for correctness): http://en.wikipedia.org/wiki/Java_Native_Interface.
JNI calls carry significant overhead, so ideally the application should keep them to a minimum. Specifically, using native code in an Android* application doesn't guarantee a performance boost! Performance gains are common when the native code performs CPU-intensive work (such as heavy-duty use of SSE instructions), but when, for example, the application simply presents a complex web interface to the user, routing work through JNI may actually hinder performance. There is no "written rule" as to when the NDK should and shouldn't be used, but these points provide some general guidelines and things to look out for.
Developers can grab the latest version here: http://developer.android.com/sdk/ndk/index.html. As of NDK r6b, the NDK can be used to build both ARM*–based and x86 (Intel® Atom™ microarchitecture)–based native libraries. This gives developers everything needed for native code porting in a single package.
The developer will create an Android.mk makefile for a project, and optionally, an Application.mk file. The Application.mk file is used to describe which native modules are needed by your application. The Android.mk file is used to control how and from what a module (static/shared library) is built. Here is a snippet of a simple Android.mk file:
Figure 1: Simple Android.mk file contents
The build system will prepend lib and generate a library named libtest.so. LOCAL_SRC_FILES, as expected, is where the developer provides the names of the project source files. LOCAL_LDLIBS and LOCAL_CFLAGS specify linker flags and compiler flags, respectively.
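Since Figure 1 is not reproduced here, the following is a minimal sketch of an Android.mk along the lines just described. The module name (test), source file name, and flag values are hypothetical placeholders:

```makefile
# Minimal Android.mk sketch; names and flags are illustrative only.
LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)

LOCAL_MODULE    := test        # build system prepends "lib" -> libtest.so
LOCAL_SRC_FILES := test.c      # project source files
LOCAL_LDLIBS    := -llog       # example linker flag
LOCAL_CFLAGS    := -O2         # example compiler flag

include $(BUILD_SHARED_LIBRARY)
```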
At the command line, here is an example of how to specify that the build targets the x86 architecture: ndk-build APP_ABI=x86
There are two methods for calling into a native library: System.loadLibrary("relative_path_and_name") and System.load("full_path_to_lib_file"). The former is more commonly used and more robust. When using it, the "lib" prefix of the library name specified in the Android.mk file can be dropped. Here is an example of such a call:
Figure 2: Example Call to Native Code
Additionally, on the native code side, the developer needs to ensure that the entry method into the native code has the correct JNIEXPORT method signature, as opposed to a typical C/C++ header. The aforementioned JNI links can provide more information on this.
The developer can elect to load a native library either by providing it in the Android* apk package and referencing it at runtime, or by providing the absolute path to the library's location on the Android* filesystem. This is a matter of developer preference and should be handled accordingly.
Using the adb logcat command, the developer can verify that the target native library loads successfully at runtime. Here is an example system log showing that a native library has loaded. Note that the full path to the native library file is given.
Figure 3: Example system log showing a successful native library load
The aforementioned sections provide a head start on using the NDK. For more intricate details, start with the documentation that comes with the NDK package; it includes some great tutorials along with example source code for various applications.
Porting an existing NDK application to x86 for most applications should be very straightforward. Unless the native code uses ARM*-specific features, porting the app should be as simple as recompiling, repackaging and republishing.
The following will walk you through the steps needed to get your NDK application ported to x86.
- Obtain the latest NDK tools. x86 support was first added in android-ndk-r6, but that release contained a few issues that were fixed shortly after. Be sure you have downloaded and installed the latest NDK (android-ndk-r6b at the time of writing) from the Android* NDK site.
- If you have an Application.mk file, edit the APP_ABI line to include x86. Example:
APP_ABI := armeabi armeabi-v7a x86
If you didn’t use an Application.mk file, add x86 to the command-line build. Here are the command line and output from building one of the NDK sample applications:
$ ndk-build APP_ABI="armeabi armeabi-v7a x86"
Install : test-libstl => libs/armeabi/test-libstl
Install : test-libstl => libs/armeabi-v7a/test-libstl
Install : test-libstl => libs/x86/test-libstl
- From the previous step, we see that a folder containing the binary for each architecture was created under the libs directory. The next step is to repackage the APK to include the new libraries. Since the libs directory is under the root project folder, the build tool that creates the APK is already aware of binaries in that folder. From Eclipse, simply rebuilding the project APK is enough to incorporate the new x86 binary; the same goes for command-line builds. Here is sample output from rebuilding the hello-jni sample:
$ android.bat update project --path C:/Tools/android-ndk-r6b/samples/hello-jni
Added file C:\Tools\android-ndk-r6b\samples\hello-jni\build.xml
Added file C:\Tools\android-ndk-r6b\samples\hello-jni\proguard.cfg
$ ant -f hello-jni/build.xml debug
[echo] Running zip align on final apk...
[echo] Debug Package: android-ndk-r6b\samples\hello-jni\bin\HelloJni-debug.apk
- That’s it. The next step is to run and test on an Intel architecture device or the x86 emulator. One last step to verify that all the binaries are packaged correctly is to open the APK with a zip archival tool and ensure the binaries are present. Here is a screenshot of what the APK structure looks like with the x86 binary present.
Porting Tips ARM* to x86
Application porting to x86 should be straightforward, but it’s possible that assumptions have been made along the way and now differences between the Intel® Atom™ and ARM* architectures need to be addressed in your code. The following topics talk about some of the possible issues you could encounter and how to address them.
It is possible that your build environment uses the toolchain directly instead of using the Android* build scripts. In the case of ARM* the path used is:
For x86 use the path:
For more information see the NDK document located in android-ndk/docs/STANDALONE-TOOLCHAIN.html.
There can be memory alignment mismatches when porting C/C++ code between the ARM* and Intel® Atom™ microarchitectures. The following article provides a great example of this: /en-us/blogs/2011/08/18/understanding-x86-vs-arm-memory-alignment-on-android. The main point is that developers should explicitly enforce the alignment of data where needed in the design of the code. Otherwise, there is no guarantee it will be handled the same way on a different platform.
There are currently three supported Application Binary Interface (ABI) options when building NDK libraries:
- ‘armeabi’ – the default; creates binaries targeting ARM* v5TE based devices. Floating-point operations with this target are performed in software. Binaries created with this ABI will work on all ARM* devices.
- ‘armeabi-v7a’ – Creates binaries that support ARM* v7 based devices and will use hardware FPU instructions.
- ‘x86’ – Generates a binary to support the IA-32 instruction set that includes hardware based floating point operations.
All of these ABI options support floating-point operations, and unless ARM*-specific assembly instructions are being used, they should not cause problems while porting code to x86. One upside: if you have been compiling only for ‘armeabi’ and are now adding x86 support, you should see a performance gain for most floating-point operations.
While not all the bases can be covered in this short article, the information below gives a quick overview of how the implementation of SIMD extensions differs between Intel architecture and ARM*. With this overview, the developer will have the tools to start some simple coding exercises.
NEON* is an ARM* technology used primarily in multimedia applications (smartphones, HDTV, etc.). ARM* documents that this 128-bit SIMD engine, an extension of the ARM* Cortex*–A Series, delivers at least 3x the performance of the ARM* v5 architecture and at least 2x that of its successor, ARM* v6. For a deeper dive into NEON* along with other performance considerations, see: http://www.arm.com/products/processors/technologies/neon.php
The key idea is that registers are “chunked” together as a vector, where each register in the vector is an element that matches the data type of the other elements. Then, operations are performed in such a way that they are performed across lanes, making the methodology known as Packed SIMD.
SSE is the Streaming SIMD Extensions for Intel Architecture (IA). The Intel® Atom™ processor currently supports up to SSSE3 (Supplemental Streaming SIMD Extensions 3); it does not support SSE4.x. SSE is also a 128-bit engine, dealing with packed floating-point data. The execution model started with MMX technology, and SSE is essentially the newer generation that replaces the need for MMX. For more information, see "Volume 1: Basic Architecture" of the Intel® 64 and IA-32 Architectures Software Developer's Manuals: http://www.intel.com/content/www/us/en/processors/architectures-software-developer-manuals.html. Currently, the SSE overview is in section 5.5; it provides the opcodes for SSE, SSE2, SSE3, and SSSE3. Note that data operations usually involve manipulating packed, precision-based floating-point values, and bulk moves of data can be performed between XMM registers or between those registers and memory. XMM registers are essentially intended as a replacement for MMX registers.
While using the aforementioned IA Software Developer Manual as a cross-reference for all the individual SSE(x) mnemonics, the developer is encouraged to also look at various SSE assembly-level instructions found at this link: http://neilkemp.us/src/sse_tutorial/sse_tutorial.html.
At that link, there is a "Table of Contents" section where you can either jump directly into the code samples or browse some of the background information first. Similarly, the following manual directly from ARM* provides information and small NEON* assembly snippets: /sites/default/files/m/b/4/c/DHT0002A_introducing_neon.pdf. Refer to section 1.4 of the ARM* document.
Here are some key takeaways when comparing NEON* and SSE assembly code in general (note that this information is subject to becoming stale as the technologies evolve, and that depending on the SIMD technology and the coding problem at hand, there may be other subtleties):
Endianness. Intel only supports little-endian assembly, whereas ARM* supports big- or little-endian order (ARM* is bi-endian). In the code examples provided, the ARM* code is little-endian, like Intel. Note, though, that there may be compiler implications in the case of ARM*. For example, GCC* for ARM* has the flags -mlittle-endian and -mbig-endian. See here for more info: http://gcc.gnu.org/onlinedocs/gcc/ARM-Options.html
Granularity. In the simple assembly examples referenced (and again, this is not an exhaustive list of the differences a developer may see between NEON* and SSE), compare the ADDPS instruction for SSE (Intel) with VADD.Ix for NEON* (e.g., x = 8 or 16). Notice that the latter bakes the granularity of the data to be handled into the mnemonic itself.
There are many API nuances that may arise when porting C/C++ NEON* code to SSE. Keep in mind the assumption here is that inline assembly isn't being used, but rather true C/C++ code.
One such difference between NEON* and SSE seen at the higher level of programming involves handling large (128-bit) data sizes. This article gives a brief example of such a porting exercise: http://stackoverflow.com/questions/7203231/neon-vs-intel-sse-equivalence-of-certain-operations
Hopefully this guide helped prepare you to port an existing NDK-based application to x86. Porting to x86 enables your application to be downloaded, purchased, and used on an entirely new category of Android* devices. If you run into problems in the porting process, please feel free to comment on this article and we will be glad to assist and answer questions.
* Other names and brands may be claimed as the property of others
Copyright © 2011 Intel Corporation. All rights reserved.
Intel and Atom are trademarks of Intel Corporation in the U.S. and/or other countries.
INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT.
UNLESS OTHERWISE AGREED IN WRITING BY INTEL, THE INTEL PRODUCTS ARE NOT DESIGNED NOR INTENDED FOR ANY APPLICATION IN WHICH THE FAILURE OF THE INTEL PRODUCT COULD CREATE A SITUATION WHERE PERSONAL INJURY OR DEATH MAY OCCUR.
Intel may make changes to specifications and product descriptions at any time, without notice. Designers must not rely on the absence or characteristics of any features or instructions marked "reserved" or "undefined". Intel reserves these for future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them. The information here is subject to change without notice. Do not finalize a design with this information.
The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request.
Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order.
Copies of documents which have an order number and are referenced in this document, or other Intel literature, may be obtained by calling 1-800-548-4725, or go to: http://www.intel.com/design/literature.htm