Guest post by Jimmy Gitonga, Outgoing iHub Community Lead
In 2012, Intel gave the iHub the initial hardware for our Super Computer - a blade chassis populated with three blade servers - with Google contributing grant funding. The idea behind the iHub Cluster initiative was that it would act as a sandbox for parallel programming, as well as a centre for high performance computing.
As with all hardware installations, we could not add hardware fast enough to do everything we wanted. And now, Super Computers are running in the Cloud.
So we officially announce the end of the iHub Cluster initiative. The hardware will now fall under the iHub Community infrastructure, which includes our other hardware and software systems.
We will use the Super Computer hardware for research and training on CPU and GPU hybrid systems and software. This will revolve around parallel programming, with areas like OpenCL and HSA being investigated.
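To make "parallel programming" concrete: the core idea is applying the same operation to many data items at once, which is exactly the style OpenCL kernels use on a GPU. As a minimal CPU-only sketch (plain Python, not OpenCL), the same pattern looks like this:

```python
from multiprocessing import Pool

def square(x):
    # Each worker applies the same operation to its own data item,
    # mirroring the "same kernel, many data items" style of OpenCL.
    return x * x

def parallel_squares(values, workers=4):
    # The pool splits the input across worker processes and gathers
    # the results back in the original order.
    with Pool(processes=workers) as pool:
        return pool.map(square, values)

if __name__ == "__main__":
    print(parallel_squares([1, 2, 3, 4, 5]))  # [1, 4, 9, 16, 25]
```

On the real cluster hardware the "workers" would be GPU compute units rather than operating-system processes, but the mental model of dividing data and fanning the same function out across it carries over directly.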
Today, it’s Cloudy
When you look out on a nice day at the "Nairobi Blue" sky, it is likely you will see one or two clouds. From down here, a cloud looks distinct, with edges; but when you fly through one, you don't feel the edge. If this helps you remember: in computing, a "Cloud" appears to have the edges of the hardware it is emulating. But because you are in the Cloud, when you need more resources, you add hardware underneath and the Cloud expands seamlessly. There is no edge.
Let's take the idea a bit further. What is the "Cloud", really? With the jargon being produced around this new computing paradigm, we do need to make it "chewable".
At the beginning of Desktop computing, one bought some hardware that ran an Operating System, and on that, some software applications. With time, desktop computers were connected together and files could move from one computer to another across the network. However, to store files that everyone on the network required, a computer known as a "server" was set up within the network. This way, everyone would have access to a particular file simultaneously. These networked computers became known as a LAN (Local Area Network), or, if they were geographically apart, a Wide Area Network (WAN). Two things happened.
On one hand, the server became a very powerful computer, sometimes with resources equivalent to several desktop computers. The server had to be able to deliver files to ALL the desktop computers connecting to it simultaneously. Each desktop computer was considered a "client" to the server. This is the Client-Server model of computer networking.
Obviously, the resources on the server would sit idle much of the time, since not all the desktop computers are served at the same moment. So how can this excess power be tapped? Enter Virtualisation. Rather than buy another physical desktop when adding a user, one could create a virtual desktop machine on the server and access it across a very fast network. This would allow for the purchase of low-power desktop computers that would simply connect a monitor, keyboard and mouse to a virtual machine (VM) on the server, across the network. With time, one can have multiple virtual machines, all working at the same time and being accessed as necessary from different client computers. This works very well in a situation where, in a school, there is a class of students, each with his or her client computer, working on a VM off the school server. When the class ends, only one machine is switched off: the server.
On the other hand, multiple physical desktop computers on a network sit idle when the users go home. What if one is running a program on one desktop computer and wants to send parts of the work the program is processing to other computers running the same software across the network? A perfect example is a content developer who wants to render out a 3D video. He can do it on his machine, taking longer, or he can divide the rendering task, send instructions as messages, and make the other machines render out each frame of the movie. On completion, each frame is sent back to the "master" and combined with the others to form the video piece on the first machine. This is the Peer-to-Peer model, and it is the point at which Super Computing takes off: Weta Digital built a Super Computer to render out the Avatar movie.
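The master/worker render-farm pattern described above can be sketched in a few lines. This is a minimal illustration, not a real renderer: the `render_frame` function here is a hypothetical stand-in that just returns a frame filename, where a real farm would invoke the 3D package's command-line renderer for that frame on a remote machine.

```python
from concurrent.futures import ProcessPoolExecutor

def render_frame(frame_number):
    # Hypothetical stand-in for the real work: a production farm would
    # run the renderer for this one frame and return the output file.
    return (frame_number, "frame_%04d.png" % frame_number)

def render_video(frame_count, workers=4):
    # The "master" hands each frame to a worker, then collects the
    # finished frames and puts them back in order so they can be
    # assembled into the final video piece.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(render_frame, range(frame_count)))
    return [name for _, name in sorted(results)]

if __name__ == "__main__":
    print(render_video(3))  # ['frame_0000.png', 'frame_0001.png', 'frame_0002.png']
```

Here the "peers" are worker processes on one machine; in the Peer-to-Peer model they would be other computers on the network, with the instructions sent as messages instead of function calls, but the divide-render-recombine structure is the same.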
Let us take virtualisation to a new level. With the Client-Server model in our earlier school example, turn the initial server on the network into a virtual machine. The desktop computers connect to a network switch, and each desktop computer, through the switch, accesses the network server, which is now a virtual machine on a bigger server. This bigger server offers VMs as servers, each one "serving" a particular network. But it quickly becomes clear that the bigger server would be complex and expensive for a medium-sized school to maintain, and its operating system would be very different from that of a desktop computer.
This is where the Cloud comes in. Rather than maintain the expensive bigger server, one would create a virtual bigger server hosted on a network outside the school, billed as a service based on time and resources used. At night, barely anything would be running, so the bigger server on the Cloud would "power down"; in the morning when school begins, it would "power up". One can add virtual LAN servers without worrying about running out of hardware resources. Thus the Cloud is built on a virtualised computing infrastructure that allows it to be delivered and paid for as a service. This enables automation, scalability, agility and on-demand service delivery almost instantly.
What we have illustrated here is Infrastructure-as-a-Service. The Cloud is the virtualisation of any level of computing that can be offered as a service. The three main ones are Infrastructure-as-a-Service, Platform-as-a-Service and Software-as-a-Service. For people who want to set up their own private clouds, some providers offer Metal-as-a-Service.
Angani, Kili and the iHub Cloud
The Cloud is here, and our corporate partners Intel and Microsoft Kenya gave us a total of 3 servers with which we have put together the iHub Cloud. It was built with the help of Kili.io, a new Cloud provider here in Kenya that is hoping to be up and running in a matter of weeks as we speak. The iHub Cloud is only accessible to people who are sitting at the iHub, and it is free for an initial period of 6 months. It is built to be a development environment that includes billing and resource management info, so that a developer can know what the system will cost when it goes live in production.
That is not all: a number of iHub core members have gone out and set up Angani. This system is up and serving a growing number of corporate clients. The iHub has been granted the ability to give a free 6-month period to any iHub member who wants to deploy in a production environment. This means that your system will be accessible across the Internet.
To get on these systems immediately, ping us at cloud(at)ihub(dot)co(dot)ke.