Fault, Configuration, Accounting, Performance and Security Management of Distributed Transactions


Dr. Rao Mikkilineni
Kawa Objects Inc.
IEEE Member
Los Altos, USA

Dr. Giovanni Morana and Daniele Zito
University of Catania
Catania, Italy
giovanni.morana@dieei.unict.it, and zito.daniele@gmail.com


Abstract- This paper describes a prototype implementing a high degree of transaction resilience in distributed software systems using a non-von Neumann computing model exploiting parallelism in computing nodes. The prototype incorporates fault, configuration, accounting, performance and security (FCAPS) management using a signaling network overlay and allows the dynamic control of a set of distributed computing elements in a network. Each node is a computing entity endowed with self-management and signaling capabilities to collaborate with similar nodes in a network. The separation of parallel computing and management channels allows the end-to-end transaction management of computing tasks (provided by the autonomous distributed computing elements) to be implemented as network-level FCAPS management.

While the new computing model is operating system agnostic, a Linux, Apache, MySQL, PHP (LAMP) based services architecture is implemented in a prototype to demonstrate end-to-end transaction management with auto-scaling, self-repair, dynamic performance management and distributed transaction security assurance. The implementation is made possible by a non-von Neumann middleware library providing Linux process management through multi-threaded parallel execution of self-management and signaling abstractions.


The advent of many-core servers with hundreds and even thousands of computing cores, with high-bandwidth communication among them, makes the current generation of server, networking and storage equipment and their management systems, which have evolved from server-centric and bandwidth-limited architectures, unsuited to exploiting the next generation of computing infrastructure efficiently. It is hard to imagine replicating current TCP/IP-based socket communication, “isolate and fix” diagnostic procedures, and the multiple operating systems (which do not have end-to-end visibility or control of business transactions that span multiple cores, multiple chips, multiple servers and multiple geographies) inside the next generation of many-core servers without addressing their shortcomings. In order to cope with the scaling issues and utilize many-core technologies effectively, the next generation of service architectures has to emulate the architectural resiliency of cellular organisms, which tolerate faults and implement command and control structures that enable the execution of self-configuring, self-monitoring, self-protecting, self-healing and self-optimizing (in short, self-*) business processes.

Figure 1 shows the evolution of the current computing infrastructure with respect to three parameters: system resiliency, efficiency and scaling. Resiliency is measured with respect to a service’s tolerance to faults, fluctuations in contention for resources, performance fluctuations, security threats and changing business priorities. Efficiency is measured in terms of total cost of ownership and return on investment. Scaling addresses end-to-end resource provisioning and management with respect to the increasing number of computing elements required to meet service needs.

Figure 1. The Resiliency, Efficiency and Scaling of Information Technology Infrastructure.
Grid and cloud computing management brings automation of physical and virtual resource management.

As information technologies evolved from server-centric computing to Internet/Intranet-based managed grid and cloud computing technologies, resiliency, efficiency and scaling improved through the automation of many labor-intensive and knowledge-sensitive resource management tasks to meet changing application/service needs.

Unfortunately, extending the current state of the art to develop applications that harness the full power of many-core systems is difficult and requires software developers to transition from writing serial programs to writing parallel programs [1]. Parallel applications share data, and the thread technologies currently used to modify shared data can behave in unpredictable ways, resulting in a complex web of debugging and optimization strategies. To obtain the required behavior, access to shared data must be coordinated across all cores using proper synchronization techniques, applying suitable policies and patterns based on overall system goals, the relative priorities of various tasks, and latency constraints. Current programming techniques, the operating systems that must supply effective multicore resource management, and the high-level application programming models that support distributed transaction management all have to be re-examined to leverage the parallelism of the processing cores.
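To make the coordination requirement concrete, the following Python sketch (our own illustration, not part of the prototype) shows several threads incrementing a shared counter. The read-modify-write behind "counter += 1" is not atomic, so without the lock updates can be lost; the lock serializes access and restores predictable behavior.

    import threading

    counter = 0
    counter_lock = threading.Lock()

    def worker(iterations):
        """Increment the shared counter; the lock serializes each read-modify-write."""
        global counter
        for _ in range(iterations):
            with counter_lock:      # removing this lock makes lost updates possible
                counter += 1

    threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print(counter)  # 400000 with the lock; typically less without it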

In addition, current approaches to resource management, albeit with automation, are not sensitive to the distributed nature of transactions, and contention resolution for shared distributed resources is, at best, complex, involving many layers of management systems. As von Neumann [2] pointed out, the current design philosophy that “errors will become as conspicuous as possible, and intervention and correction follow immediately” does not allow the scaling of services management with an increasing number of computing elements involved in the transaction. Comparing computing machines and living organisms, he points out that computing machines are not as fault tolerant as living organisms. He goes on to say, "It's very likely that on the basis of philosophy that every error has to be caught, explained, and corrected, a system of the complexity of the living organism would not run for a millisecond." More recent efforts, in a similar vein, are looking at resiliency borrowed from biological principles [3] to design future Internet architectures.

In this paper, we will revisit the design of distributed systems with a new non-von Neumann computing model (called Distributed Intelligent Managed Element (DIME1) Network computing model) [4, 5, 6 and 7] that integrates computational workflows with a parallel implementation of management workflows to provide dynamic real-time FCAPS management of distributed services and end-to-end service transaction management.

The DIME network architecture provides a new direction to harness the power of many-core servers with the architectural resiliency of cellular organisms and a high degree of scaling and efficiency. It also eliminates many of the shortcomings of the current solutions proposed for solving the scalability issue in these systems, i.e., the use of SSI [8] or the introduction of multiple instances of the OS in a single enclosure with socket communication among them instead of high-speed shared memory or PCI Express (e.g., [9]), both of which are inefficient because they increase management complexity. A review of other operating system approaches has been presented elsewhere by one of the authors [7].

1DIME™, Cloud-DNA and Dime Network Architecture are Trade Marks of Kawa Objects Inc.

The focus of this paper is the demonstration of the resiliency, scaling and efficiency of the new computing model in a conventional Linux operating system environment by injecting a non-von Neumann middleware to introduce self-management (FCAPS) and network-aware signaling abstractions (alerting, addressing, supervision and mediation). We have chosen a popular LAMP-based web services infrastructure to demonstrate end-to-end transaction management based on business priorities, workload fluctuations, component failures and latency constraints. We demonstrate auto-scaling, self-repair, performance management, end-to-end transaction security assurance and live migration of services without the need for a hypervisor, other server virtualization technologies or any new standards. This implies that DIME networks offer computing and storage on demand, without the need for an additional “Hypervisor Layer”, simplifying the parallelization schema and, at the same time, speeding up the communication among the various components.

In addition, using the features provided by FCAPS management, we have found that the DIME network infrastructure simplifies managed services development. The developer, in fact, has to focus only on the algorithmic part (i.e., the computing workflow) of the service, leaving the management issues (such as fault, configuration, accounting, performance and security management) to the DIME infrastructure. Above all, this approach makes it possible to leverage current OSs by converting each process into a DIME, while also allowing the development of a native distributed, parallel and scalable operating system, as discussed elsewhere [6 and 7].

The paper is organized as follows. In Section II, we briefly review the new computing model and the DIME network architecture (DNA) that allows programming self-* services creation, delivery and assurance frameworks. In Section III, we use DNA to implement a Linux, Apache, MySQL, and PHP (LAMP) based services architecture to demonstrate end-to-end transaction management with auto-scaling, self-repair, dynamic performance management and distributed transaction security assurance. The implementation demonstrates a true decoupling of services and their management from the hardware infrastructure and its management, and shows resiliency, scaling and efficiency that go beyond the current state of the art. In Section IV, we present a discussion with some thoughts on future directions for continued research. In Section V, we conclude the paper with some thoughts on DNA in biology and DNA in information technologies.


The DIME computing model exploits parallelism to implement a signaling network overlay over a network of von Neumann SPC computing nodes (cores in a multi-core server using a new operating system [6 and 7], or Linux processes in conventional computing [5]). The multiple threads available in each core, or in a Linux process implementation, are exploited to implement a self-managed computing element called the DIME. Each DIME presents a computing element that can execute a managed computing process with fault, configuration, accounting, performance and security management. Figure 2 shows a comparison between the von Neumann SPC computing model and the DIME computing model.

Figure 2. A comparison between the von Neumann SPC computing model and the DIME computing model.
For a description of the DIME Network Architecture and the Genetic transactions, please see the video http://youtu.be/Ft_W4yBvrVg

The parallelism of service execution and service control allows real-time monitoring of service behavior and management based on the policies and constraints specified by the regulators, both at the node level and at the network level. The DIME network architecture thus allows the description and management of the service to be separated from the execution of the service (performed by a computing thread called the Managed Intelligent Computing Element, or MICE). The signaling control network allows parallel management of the service workflow. In step 1, the service regulator instantiates the DIME and provisions the MICE based on the service specification. In step 2, the MICE is loaded, executed, and managed under the service regulation policies. At any time, the MICE can be controlled through its FCAPS management mechanism by the service regulator.
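A minimal sketch of this structure in Python follows; the class and attribute names (DIME, the MICE thread, the signals queue, the policies) are our own illustrative assumptions rather than the prototype's API. It shows the essential point: the computing task and its FCAPS management run as parallel threads within one managed element, with a signaling channel through which a regulator can intervene at any time.

    import queue
    import threading

    class DIME:
        """Illustrative DIME node: a managed computing element (MICE) running in
        parallel with its own FCAPS management and signaling thread."""

        def __init__(self, name, task, policies):
            self.name = name
            self.task = task              # the computing workflow executed by the MICE
            self.policies = policies      # local FCAPS policies supplied by the service regulator
            self.signals = queue.Queue()  # inbound signaling channel (alerting, supervision, ...)
            self.history = []             # local state and transaction history
            self._stop = threading.Event()

        def start(self):
            # Parallel channels: one thread for computing, one for management/signaling.
            self.mice = threading.Thread(target=self._run_mice, name=self.name + "-MICE")
            self.fcaps = threading.Thread(target=self._run_fcaps, name=self.name + "-FCAPS")
            self.mice.start()
            self.fcaps.start()

        def _run_mice(self):
            # Step 2: the MICE is loaded and executed under the regulation policies.
            result = self.task()
            self.history.append(("task-completed", result))

        def _run_fcaps(self):
            # React to signals while the task runs; a full implementation would
            # dispatch to fault, configuration, accounting, performance and
            # security handlers according to self.policies.
            while not self._stop.is_set():
                try:
                    signal = self.signals.get(timeout=0.5)
                except queue.Empty:
                    continue
                self.history.append(("signal", signal))
                if signal == "stop":
                    self._stop.set()

    # Step 1 (illustrative): the service regulator instantiates the DIME and provisions the MICE.
    node = DIME("dime-1", task=lambda: sum(range(10)), policies={"max_latency_ms": 100})
    node.start()
    node.signals.put("stop")   # the regulator can intervene over the signaling channel at any time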

There are three key features in this model that differentiate it from all other models:

  1. The self-management features of each SPC node, with FCAPS management using parallel threads, allow autonomy in controlling local resources and providing services based on local policies. Each node keeps its state information and a history2 of its transactions. The DIME node provides managed computing services, using the MICE, to other DIMEs based on local and global policies.
  2. The network aware signaling abstractions allow a group of DIMEs to be programmed to manage themselves with sub-network/network level FCAPS management based on group policies and execute a service workflow as a managed directed acyclic graph (DAG).
  3. Run-time profile based FCAPS management (at the group level and at the node level) allows a composition scheme by redirecting the MICE I/O to provide recombination and reconfiguration of service workflows dynamically at run-time.

2 The concept of state awareness and history of computational transactions provided at the node level and at the network level introduces a non-Markovian element into the DIME computing model which allows for diagnosis-after-the-fact to facilitate system level predictive corrections.

The MICE provides the logical type that performs everything that is feasible within that logical type (a Turing machine), and the DIME FCAPS management provides a higher logical type (management of the Turing machine) which describes and controls what is feasible in the MICE [10]. These features provide the powerful genetic transactions, namely replication, repair, recombination and reconfiguration, that have proven essential for the resiliency of cellular organisms [11].

We have applied the DIME computing model to convert a Linux process into a DIME with self-management (FCAPS management of the Linux process) and signaling awareness, to create a managed DIME network implementing a managed workflow. The details of injecting the DNA into the Linux OS using a non-von Neumann middleware are described elsewhere [5]; the injection requires no special accommodations from the operating system. In fact, the non-von Neumann middleware uses standard OS services, so it can be easily ported to other operating systems offering multi-threading capabilities.

In the next section, we describe the use of a DIME network (each DIME encapsulating a Linux process with FCAPS management) to implement a LAMP-based web services architecture to demonstrate end-to-end transaction management with auto-scaling, self-repair, dynamic performance management and distributed transaction security assurance.


Splitting the service design into a service regulator component and a service execution package, as shown in Figure 3, allows the description and control of the service to be separated from its execution and made available at run time, providing resiliency in services management. This separation also makes possible the genetic transactions of replication, repair, recombination and reconfiguration that are the distinguishing characteristics of cellular organisms. Each DIME executes a set of tasks arranged in a DAG. Each node of this DAG contains both the task executables (each of which could itself be another DAG) and the profile DAG, as a tuple <task (SP), profile (SR)>: in this way, it is possible to specify not only what a DIME has to do or execute but also its management (how this has to be done and under what constraints). These constraints allow the control of FCAPS management both at the node level and at the sub-network level. In essence, at each level in the DAG, the tuple gives the blueprint for both the management and the execution of the downstream graph. Under these considerations, it is easy to understand the power of the proposed solution in designing self-configuring, self-monitoring, self-protecting, self-healing and self-optimizing distributed service networks.
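The tuple structure described above can be sketched as a simple data structure; the Python below is our own illustration (names such as WorkflowNode and WorkflowDAG are assumptions, not the prototype's types). Each node pairs an executable task, which may itself be a nested DAG, with the profile that regulates it, so the blueprint for both execution and management travels down the graph together.

    from dataclasses import dataclass, field
    from typing import Callable, Dict, List, Union

    @dataclass
    class WorkflowNode:
        """One node of the managed DAG: the tuple <task (SP), profile (SR)>."""
        task: Union[Callable[[], object], "WorkflowDAG"]  # what to execute (the task may itself be a DAG)
        profile: Dict[str, object]                        # how, and under what FCAPS constraints

    @dataclass
    class WorkflowDAG:
        nodes: Dict[str, WorkflowNode] = field(default_factory=dict)
        edges: Dict[str, List[str]] = field(default_factory=dict)  # node name -> downstream node names

        def add(self, name, node, downstream=()):
            self.nodes[name] = node
            self.edges[name] = list(downstream)

    # Example: an Apache front end whose profile carries its performance policy,
    # feeding a MySQL-backed task further down the graph.
    workflow = WorkflowDAG()
    workflow.add("apache",
                 WorkflowNode(task=lambda: "serve-http",
                              profile={"max_response_ms": 200, "scale_out_on_breach": True}),
                 downstream=["mysql"])
    workflow.add("mysql",
                 WorkflowNode(task=lambda: "run-queries",
                              profile={"max_query_ms": 50}))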

Figure 3. The anatomy of a DIME and the separation of service regulation and service execution workflows.

The DIME network architecture takes its cues from parallels in cellular biology, where regulatory genes control the actions of other genes, giving them the ability to turn specific behaviors on or off. As affirmed by Philip Stanier and Gudrun Moore [11], “In essence, genes work in hierarchies, with regulatory genes controlling the expression of ‘downstream genes’ and with the elements of ‘cross-talk’ between the regulatory genes themselves.” The same parallel, furthermore, exists between the task profile and the concept of gene expression. Gene expression is the process by which information from a gene is used in the synthesis of a functional gene product.

Figure 4 shows a DIME network of Linux processes implementing a web services workflow using a MySQL database and Apache and PHP services.

DNA enables the application or service running in the MICE, under the control of the FCAPS manager, to provide fault and performance information through intra-DIME signaling, which is utilized by the end-to-end DIME network management infrastructure using inter-DIME signaling. The policies are implemented by the DIME service network managers (the Supervisor, Fault Manager, Performance Manager, Accounting Manager and Security Manager), which are designed to execute policy implementation workflows.

A simple workflow (sketched in code after the list) is as follows:

  1. The local Performance Manager in DIME 2 receives notifications about the response time of the web sites deployed in Apache.
  2. When the response time exceeds a threshold, the signaling channel is used to notify, via the network Performance Manager, the Supervisor, which instantiates an additional Apache3 in a new DIME.
  3. The Supervisor, based on business priorities, workload management policies and latency constraints, coordinates with the Configuration and Security Managers of the network to instantiate the new instance of Apache with the appropriate configuration.
  4. The network Configuration Manager instantiates the new service and adjusts the workload by modifying the DNS rules as required.
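The control loop in steps 1-4 above can be sketched as follows. All names, thresholds and actions here are illustrative assumptions; in particular, the DNS update and the credential issuance are stand-ins for whatever the Configuration and Security Managers actually do in the prototype.

    RESPONSE_TIME_THRESHOLD_MS = 200   # assumed performance policy, not the prototype's value

    class SecurityManager:
        # Issues whatever credentials the new instance needs (illustrative only).
        def issue_credentials(self, service):
            return {"service": service, "token": "placeholder"}

    class ConfigurationManager:
        # Step 4: instantiate a consistent copy of the service in a new DIME and
        # rebalance the workload, e.g. by updating the DNS rules.
        def scale_out(self, service, credentials):
            new_dime = service + "-copy"   # stand-in for real DIME instantiation
            print("instantiating", new_dime, "and updating DNS rules")
            return new_dime

    class Supervisor:
        # Steps 2-3: decide, under business priorities and latency constraints,
        # whether to scale out, then coordinate configuration and security.
        def __init__(self, config_mgr, security_mgr, policies):
            self.config_mgr = config_mgr
            self.security_mgr = security_mgr
            self.policies = policies

        def handle_breach(self, dime_id, response_ms):
            if self.policies.get("auto_scale", True):
                credentials = self.security_mgr.issue_credentials("apache")
                self.config_mgr.scale_out("apache", credentials)

    def on_response_time_report(dime_id, response_ms, supervisor):
        # Steps 1-2: the local Performance Manager checks each report and, on a
        # breach, escalates to the Supervisor over the signaling channel.
        if response_ms > RESPONSE_TIME_THRESHOLD_MS:
            supervisor.handle_breach(dime_id, response_ms)

    # Usage: a report from DIME 2 exceeding the threshold triggers scale-out.
    supervisor = Supervisor(ConfigurationManager(), SecurityManager(), {"auto_scale": True})
    on_response_time_report("dime-2", 350, supervisor)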

Figure 4. DIME network implementing web services using LAMP services, with FCAPS management at both the node level and the network level.

3 A consistent copy of the first Apache

A similar management workflow regulates fault management, using the heartbeats provided by each DIME to the network Fault Manager at regular intervals. The database response time is similarly monitored by periodically querying the MySQL database, and appropriate policies are enforced to meet business priorities. Scaling up or down is implemented by the Configuration Manager based on workloads, latency constraints and overall business priorities.
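A corresponding fault-management loop can be sketched in the same spirit; the interval, the missed-beat policy and the repair callback below are our own assumptions. Each DIME reports a heartbeat to the network Fault Manager at a regular interval, and a DIME that falls silent for too long is handed to a repair action (for example, re-instantiation by the Supervisor).

    import time

    HEARTBEAT_INTERVAL_S = 5        # assumed reporting interval
    MISSED_BEATS_BEFORE_REPAIR = 3  # assumed fault policy

    class FaultManager:
        """Illustrative network Fault Manager: DIMEs report heartbeats; a silent
        DIME is declared faulty and passed to a repair action."""

        def __init__(self, repair_action):
            self.last_seen = {}
            self.repair_action = repair_action   # e.g. ask the Supervisor to re-instantiate the DIME

        def heartbeat(self, dime_id):
            self.last_seen[dime_id] = time.monotonic()

        def check(self):
            deadline = HEARTBEAT_INTERVAL_S * MISSED_BEATS_BEFORE_REPAIR
            now = time.monotonic()
            for dime_id, seen in list(self.last_seen.items()):
                if now - seen > deadline:
                    self.repair_action(dime_id)
                    self.last_seen[dime_id] = now   # reset after triggering repair

    # Usage: the Fault Manager would call check() periodically from its own thread.
    fm = FaultManager(repair_action=lambda d: print("re-instantiating", d))
    fm.heartbeat("dime-apache-1")
    fm.check()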

The policies are implemented at various levels: at the node level and at the sub-network or network level. In addition to the domain-specific service management workflow, each DIME implements its own local FCAPS management, independent of what the MICE processes are doing. This allows DNA-level programming of DIME instantiation and life-cycle management to assure 100% infrastructure availability, performance and security service levels. While a simple end-to-end transaction security check is performed with a login and password scheme that allows service network management, a more elaborate authentication, authorization and accounting scheme is discussed in another paper by Tusa et al. [12].

We believe that the DIME network architecture represents a major departure from current cloud approaches [13, 14]. Using the non-von Neumann approach, it radically improves resource exploitation through parallelism, even within cloud environments, hiding the complexity of managing FCAPS issues from both the developers and the users of cloud services.


The DIME computing model attempts to fill the need to break the von Neumann bottleneck and leverage the hardware upheaval to improve the resiliency, efficiency and scaling of future services infrastructure, as shown in Figure 5.

Figure 5. The resiliency, efficiency and scaling with non-von Neumann middleware on the Linux operating system, showing the transition from a physical server to a virtual server to a virtual service container as the atomic managed computational unit.

While the paper discusses the injection of DNA into the Linux OS, we see no technical obstacle to doing the same in other operating systems. The key requirement is the multi-threading capability needed to implement a parallel management workflow to control the Turing machine implemented as a process in the conventional OS. It has also been demonstrated [7] that DNA can be injected at the core with a native OS written from scratch that scales and provides resiliency. It has likewise been demonstrated that we can leverage current service-oriented architectures, development environments and workflow implementations using a network of Turing machines by migrating them to a managed network of Turing machines using DNA. It is interesting to note that decoupling services management from infrastructure hardware management using DNA does not require any new standards or approaches, except exploiting parallelism to separate the management and computing workflows at the core or at the process level using self-management, signaling and network management abstractions.

In designing this new class of distributed systems, it behooves us to go back and seriously study von Neumann’s views on the subject [2]. Speaking of cellular organisms and how they continue to operate in the presence of errors, he points out that “the system is sufficiently flexible and well organized that as soon as an error shows up in any part of it, the system automatically senses whether this error matters or not. If it does not matter, the system continues to operate without paying any attention to it. If the error seems to the system to be important, the system blocks that region out, by-passes it, and proceeds along other channels. The system then analyzes the region separately at leisure and corrects what goes on there, and if correction is impossible the system blocks the region off and by-passes it forever. The duration of operability of the automaton is determined by the time it takes until so many incurable errors have occurred, so many alterations and by-passes have been made, that finally the operability is really impaired. This is a completely different philosophy from the philosophy which proclaims that the end of the world is at hand as soon as the first error occurred.”

In order to benefit from the approach adopted by cellular organisms, current services management approaches must implement two features at the core computing element (a von Neumann computing node). First, they must implement self-management based on local history and local policy requirements. Second, they must provide a parallel signaling channel through which a network of self-managed computing elements can communicate and collaborate to implement global policies.

While current cloud and grid management systems implement services management by monitoring various application or service characteristics through a variety of management systems, the applications or services that use local operating systems in each node still have their resource and service management serialized under the von Neumann SPC computing model. The DNA addresses this by implementing the separation at the computing node by exploiting parallelism.

Discussing the work of Francois Jacob and Jacques Monod on genetic switches and gene signaling, Mitchell Waldrop [15] points out that "DNA residing in a cell's nucleus was not just a blueprint for the cell - a catalog of how to make this protein or that protein. DNA was actually the foreman in charge of construction. In effect, DNA was a kind of molecular-scale computer that directed how the cell was to build itself and repair itself and interact with the outside world.”

We believe that the DIME network architecture, by enabling the execution of a workflow as a managed directed acyclic graph, provides at least a mechanism for a blueprint for enterprise business process description, replication, execution and control, using a lengthy recursive sequence of nested programs that unfold in the von Neumann computing world under a non-von Neumann computing model.

Future directions of this research are self-evident. First, the non-von Neumann middleware can be exploited to improve the resiliency, efficiency and scaling of current grid and cloud services by decoupling services management from infrastructure hardware management. This approach allows implementing reliable and resilient services on unreliable hardware, just as cellular organisms do.

A few immediate applications present themselves:

  1. Dynamic many-core cluster communication management across multiple Linux images, choosing the type of communication based on available resources and service requirements.
  2. Implementation of a WAMP services architecture using the DIME network architecture.
  3. Application-aware resource allocation (dial-up and dial-down) at run time.
  4. Resilient services-oriented architecture (RSOA) implementation through the migration of the services micro-container [16] into a DIME.
  5. High-performance computing (HPC) resource scheduling and management.

Secondly, the hardware infrastructure itself can be redesigned (exploiting the many-core architecture) to become signaling aware and respond to application requests at run time. Future storage and networking hardware can thus be simplified with a hardware-assisted DIME architecture to eliminate current layers of management software and special-purpose ASIC implementations. They can be designed to dial up and dial down raw resources (number of cores, memory, bandwidth, throughput, storage capacity, etc.) based on application requests at run time.

Finally, DNA can be implemented by chip vendors in hardware to provide self-management and signaling awareness exploiting parallelism at the core. This allows uniformity in hardware device drivers with self-management and signaling awareness.

On the theoretical side, it is worth examining the intriguing remarks of von Neumann about Gödel's theorem and its implications for the description of complexity [17]. In his Hixon Symposium talk, von Neumann remarks, “It is a theorem of Gödel that the next logical step, the description of an object, is one class type higher than the object and is therefore asymptotically infinitely longer to describe.” He goes on to say, “It is one order of magnitude harder to tell what an object can do than to produce the object.” The DNA attempts to describe and assure what an object does; in this case the object happens to be a von Neumann computing node. In the light of the new resiliency of DNA (e.g., the DIME can be instantiated and managed to provide 100% availability and recoverability), it is worthwhile to revisit classic distributed computing issues such as the dining and drinking philosophers problems, the CAP theorem, etc.


In conclusion, we observe that the evolution of living organisms has taught us that the difference between survival and extinction is the information processing ability of the organism to:

  1. Discover and encapsulate the sequences of stable patterns that have lower entropy, which allow harmony with the environment that provides the necessary resources for its survival,
  2. Replicate the sequences so that the information (in the form of best practices) can propagate from the survivors to their successors,
  3. Execute the sequences with precision to reproduce itself,
  4. Monitor itself and its surroundings in real time, and
  5. Utilize the genetic transactions of repair, recombination and rearrangement to sustain existing patterns that are useful.

That the computing models of living organisms utilize sophisticated methods of information processing was recognized by von Neumann, who proposed both the SPC computing model and the self-replicating cellular automata. Later, Chris Langton created computer programs that demonstrated self-organization and the discovery of patterns using evolutionary rules, which led to the field of artificial life and theories of complexity.

In this paper, we focus on another aspect that we learn from the genes in living organisms: the precise replication and execution of encapsulated DNA sequences. We describe a recently proposed computing model that extends the stored program control computing model to create self-configuring, self-monitoring, self-healing, self-protecting and self-optimizing (self-managing or self-*) distributed software systems. As opposed to self-organizing systems that evolve based on probabilistic considerations, this approach focuses on the encapsulation, replication, and execution of distributed and managed tasks that are specified precisely.

According to biologist Sean B. Carroll [18], “cells communicate with one another by sending signals in the form of proteins that are exported and travel away from their source. Those proteins then bind to receptors on other cells, where they trigger a cascade of events, including changes in cell shape, migration, the beginning or cessation of cell multiplication, and the activation or repression of genes.” Signaling also has proven to be a critical element in telecommunications networks and human network communications.

Signaling in the DIME network computing model is as important as it is in cellular organisms to provide resilience. In this paper we demonstrate its use in building resilient LAMP services using conventional computing infrastructure.


The authors wish to acknowledge many valuable discussions with and encouragement from Kumar Malavalli, and Albert Comparini from Kawa Objects Inc., and Marco Di Sano from University of Catania, who have contributed to the development of the DIME Network Architecture and the prototype.


  • [1] David Patterson, “The trouble with multi-core”, IEEE Spectrum, July 2010, p28
  • [2] Neumann, J. v. (1987). Theory of Natural and Artificial Automata. edited and compiled by William Aspray and Arthur Burks, MIT Press, p408 and 474. (Charles Babbage Institute Reprint Series for the History of Computing vol 12.)
  • [3] Balasubramaniam, S., Leibnitz, K., Liò, P., Botvich, D., and Murata, M. “Biological Principles for Future Internet Architecture Design”, IEEE Communications Magazine, July 2011, Vol. 49, No. 7, p44.
  • [4] Mikkilineni, R “Is the Network-centric Computing Paradigm for Multicore, the Next Big Thing?” Retrieved July 22, 2010, from Convergence of Distributed Clouds, Grids and Their Management: http://computingclouds.wordpress.com
  • [5] Morana, G., and Mikkilineni, R., “Scaling and Self-repair of Linux Based Applications Using a Novel Distributed Computing Model Exploiting Parallelism". IEEE proceedings, WETICE2011, Paris, 2011
  • [6] Mikkilineni, R. and Seyler, I. "Parallax – A New Operating System for Scalable, Distributed, and Parallel Computing", The 7th International Workshop on Systems Management Techniques, Processes, and Services, Anchorage, Alaska, May 2011
  • [7] Mikkilineni, R. and Seyler, I., “Parallax – A New Operating System Prototype Demonstrating Service Scaling and Self-Repair in Multi-core Servers”, IEEE proceedings, WETICE2011, Paris, 2011
  • [8] Buyya, R., Cortes, T., Jin, H. (2001), Single System Image, International Journal of High Performance Computing Applications 15 (2): 124-135
  • [9] http://www.seamicro.com.
  • [10] Neumann, J. v. (1987). Theory of Natural and Artificial Automata. edited and compiled by William Aspray and Arthur Burks, MIT Press, p454. (Charles Babbage Institute Reprint Series for the History of Computing vol 12.)
  • [11] Stanier, P and Moore, G. (2006), "Embryos, Genes and Birth Defects", (2nd Edition), Edited by Patrizia Ferretti, Andrew Copp, Cheryll Tickle, and Gudrun Moore, John Wiley & Sons, London, p 5
  • [12] Tusa, F., Celesti, A., and Mikkilineni, R., “AAA in a Cloud-Based Virtual DIME Network Architecture (DNA),” IEEE proceedings, WETICE2011, Paris, 2011.
  • [13] Buyya, R. and Ranjan, R.: "Special section: Federated resource management in grid and cloud computing systems" Future Generation Computer Systems 26 (2010) 1189-1191
  • [14] Buyya, R., Yeo, C.S., Venugopal, S., Broberg, J., Brandic, I. "Cloud computing and emerging IT platforms: Vision, hype, and reality for delivering computing as the 5th utility" Future Generation Computer Systems, Volume 25, Issue 6, June 2009, Pages 599-616
  • [15] Waldrop, M. M., “Complexity: The Emerging Science at the Edge of Order and Chaos”, Simon and Schuster Paperback, New York, 1992, p 31.
  • [16] Mohamed, M., Yangui, S., Moalla, S., and Tata, S. "Web service micro-container for service-based applications in Cloud environments", 2011 20th IEEE International Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises, IEEE Computer Society, Conference Publishing Services (CPS), 2011, p 61.
  • [17] Neumann, J. v. (1987). Theory of Natural and Artificial Automata. edited and compiled by William Aspray and Arthur Burks, MIT Press, p456, p457. (Charles Babbage Institute Reprint Series for the History of Computing vol 12.)
  • [18] Carroll, S. B., “The New Science of Evo Devo - Endless Forms Most Beautiful”, New York: W. W. Norton & Co. 2005, p12, p106, p113 and p129.

Dr. Rao Mikkilineni received his PhD from the University of California, San Diego in 1972, working under the guidance of Prof. Walter Kohn (Nobel Laureate, 1998). He later worked as a research associate at the University of Paris, Orsay, the Courant Institute of Mathematical Sciences, New York, and Columbia University, New York.

He is currently the Founder and CTO of Kawa Objects Inc., California, a Silicon Valley startup developing next generation computing infrastructure. His past experience includes working at AT&T Bell Labs, Bellcore, U S West, several startups and more recently at Hitachi Data Systems.

Dr. Giovanni Morana received his PhD from University of Catania, Italy and is currently at the University of Catania.

Daniele Zito is a PhD student at the University of Catania working on distributed computing and Grid computing research.

Dr. Mikkilineni and Dr. Giovanni Morana co-chair the 1st track on Convergence of Distributed Clouds, Grids and their Management in IEEE International WETICE2011 Conference.
