Slide 1 Iliyan: Oh, what a nice way to start this. With the words of a very smart man who could not be more wrong. But we'll get back to Larry later.
Slide 2 Let's begin with who we are and what makes us qualified to be here and give this presentation. I would like to introduce my business partner and good friend Venelin. Most of the time, he is responsible for turning my weird ideas into code. He loves all things Apple and hates Android as a platform, an opinion that I share with him, although for different reasons. And the other guy up on the board is me. I installed my first Linux OS back in 1997, after seeing Red Hat 4.0 running on a router. My mind was blown by the size of the virtual desktop in X and I instantly fell in love. Since then I've been a Linux-everywhere advocate, and if something can't run Linux it means it is a piece of junk, at least in my universe. We've known each other since high school, so it's been about 12-13 years now. At one point we decided that since we both work in the IT industry, it would be great to join forces and establish a start-up dealing with cloud computing and, at the same time, with embedded systems like handsets, set-top boxes and automotive computing. That's how Evil Puppy Ltd. came around.
Slide 3 Let's get on topic. Today we'll be talking about open source software and how to use it to create our own clouds; how to utilize our private infrastructure more efficiently, so we can make our bottom line bigger and at the same time tackle a pressing issue – global warming, or, for the more politically correct among you, climate change.
Slide 4 Before we get to the technical part, let's get controversial! There's a global problem that everybody is familiar with but doesn't like to talk about. It's this ‘insane’ idea that our planet is changing. We get freakish temperature changes, we get tons of natural disasters, and somehow nobody wants to talk about the elephant in the room. These temperature anomalies don't just happen out of nowhere. It is us, the people living on this planet, who contribute a lot to what is going on, with our rising energy consumption and increased emissions. However, somehow all the scientific data got political. You would never guess who the first politician to care about this was. It was Richard Nixon. The “Watergate” Richard Nixon. Yet 40 years later, the country that sets the tone for everyone else, the United States, still has not ratified the Kyoto Protocol. There's an argument between the liberal left and the conservative right – the former justify their support for measures against climate change by pointing out that it affects our everyday life and our future, while the latter say that cutting energy consumption and emissions will bring an economic slowdown.
But why bring this leftist, socialist idea here, to OpenFest? Well, some people don't realize it, but the open source community is driven by the leftist idea of socialism. It goes even further: open source is, in its essence, communism. We are a community; we work together, we share our work and our knowledge, and we don't mind if someone uses our work to make money, as long as they give something back to the community. That is why this idea – that we should care about what we use and how we use it – will resonate best with you, not with a society that builds itself on top of proprietary knowledge.
Slide 5 But how could this socialist mantra – that we should care about the well-being of everyone – bring your costs down and at the same time take care of the planet; bring your productivity up and create a paradox whereby, while taking care of your own wealth and well-being, you take care of the wealth and well-being of the whole society? It is very simple. You cut costs. You cut your energy expenditure. I bet there are people in this room who run at least 20 or 30, or more, hardware servers at their company. So do the math. First there is the price of the servers; then you get at least some eco taxes, which will potentially grow as the amount of hardware you pile up grows. Then there's the cost of running those servers, mainly the huge electricity bills you have to pay. And since most electricity is produced from fossil fuels, when you continue piling up hardware, you contribute more to the global warming process. But that is not all: you also have to cool those servers down. More hardware and more costs. At some point, running all those machines costs you an outrageous amount of money that you could spend on something better – say, hiring a few more developers. I know what you might be thinking right now: the electricity bills do not hit my bottom line that much, and the hardware is not that expensive, even with these eco taxes this guy is talking about. Well, you are almost right. The bills might not be that high and hardware may be very cheap right now, but as this problem gets worse, taxes will rise, and the more junk you've piled up, the more affected your company will be. So why not take something out of the libertarian playbook? Why not be smart right now and create an extremely efficient platform, so that when the time comes we'll be on the safe side, and in the meantime we'll be making even more money?
Slide 6 So enter “The Cloud”. Some of you are still not sure what you can do about those 20, 30 or more machines running in your server rooms. You can't just pull the plug, and the fact is, you shouldn't. At least not on all of them. What you can do is turn those servers into your own private cloud – an internal IaaS of sorts. I bet all those machines that are running right now are not loaded 24/7. You probably keep 10 of them just for testing and development, and even with a 24-hour work cycle they are not heavily used. Why not create a flexible way to migrate load around, create new servers on the fly, or even destroy them when they are not needed? Why don't you use the thing that Larry Ellison hated so much – “The Cloud”?
Slide 7 But what is the cloud and how does it work? Larry was right about something: nowadays everything is a “Cloud”. Once-simple web applications are now deemed clouds. Take blogs – Wordpress, Blogspot – or Salesforce. These are web apps, though everyone calls them cloud computing. Well, they aren't. Clouds are platforms that, in 99% of the cases, are based on commodity hardware, such as the servers in your server room. And there are two types of clouds – public and private. Public clouds are like the ones Rackspace and Amazon run, where you pay for a “virtual” server and run standard software on top of it. Private clouds do the same thing, except you run them on your own hardware, with your choice of underlying software, tailored to your specific needs. So what is the difference? Money and independence. To be independent you have to pay. You buy your own servers, your own storage, network equipment, etc., then employ people or other companies to set it up for you and teach you how to use it, and that will cost you too. However, the independence you get really is worth the price overhead. On public clouds you are dependent on the people who run them. You can ask Foursquare, Netflix and Reddit how much their bottom line was hurt when Amazon's cloud went down. Secondly, there is nothing to protect you against a price hike. On private clouds, you have all the control in case something goes seriously wrong. You can use renewable energy like wind and solar to power them, or think of smart and efficient ways to cool them down. If you invest intelligently, your bottom line will get bigger. It's that simple.
Slide 8 But let's take a look at the current situation. In 2007, cloud computing consumed 632 billion kWh. According to one Greenpeace study, by the year 2020 that is projected to reach 1963 billion kWh, which will produce 1032 megatonnes (Mt) of CO2 per year. To put that number in perspective, this is equal to the amount of CO2 that 203 million cars produce in a year. Slide 9 Yet those staggering numbers are not the worst-case scenario. As you can see, with the use of clouds, even in 2011 we will save about 6.7 Mt of CO2; by the year 2014 we'll be saving about 25 Mt, and by 2020 that number will reach almost 86 Mt. So we are obviously better off using cloud infrastructure. Slide 10 Again, to put the numbers in perspective, we have this nice chart for you that shows the savings in percentages, compared to running your IT infrastructure without implementing a cloud solution. In 2011 that is about 9% in savings. In 2014 this will reach 25%, and in 2020 that number will be 50%. The other chart at the bottom shows the expected rise of CO2 emissions year-over-year, with and without cloud infrastructures.
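As a quick sanity check on those figures, the arithmetic can be worked through in a few lines. Note that the derived carbon intensity and per-car factor below are our own back-of-the-envelope results implied by the quoted numbers, not values stated in the study itself:

```python
# Back-of-the-envelope check of the 2020 projections quoted above.
kwh_2020 = 1963e9        # projected cloud consumption: 1963 billion kWh
co2_mt_2020 = 1032       # projected emissions: 1032 Mt of CO2 per year
cars = 203e6             # number of cars with equivalent emissions

# Implied carbon intensity of the electricity mix (kg CO2 per kWh)
kg_per_kwh = (co2_mt_2020 * 1e9) / kwh_2020   # 1 Mt = 1e9 kg
print(round(kg_per_kwh, 2))   # -> 0.53, plausible for a fossil-heavy mix

# Implied emissions per car (tonnes of CO2 per year)
t_per_car = (co2_mt_2020 * 1e6) / cars        # 1 Mt = 1e6 t
print(round(t_per_car, 2))    # -> 5.08 tonnes of CO2 per car per year
```

Both derived values are in the range you would expect for a fossil-dominated grid and an average passenger car, so the study's numbers are at least internally consistent.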
Slide 11 I think we've agreed that clouds are important. But how can we build a private cloud – a platform able to run standard software, on top of which we can build our business? We could go to Citrix, VMWare, Microsoft, Oracle or IBM and buy one of their solutions, but that is not really an option for us. It costs a ton of money, it's proprietary, and compared with open-source solutions they tend to be either extremely specialized or not that good at what they do. So open source it is. So far the best open source tool that allows us to manage all kinds of different hypervisors is libvirt. It is open source software that is virtualization-agnostic, which means it can work with KVM, Xen, VMWare, Hyper-V, VirtualBox and a number of other technologies. The way libvirt works is this: it runs as a service on the Linux host and manages the hypervisor and all the machines that run on top of it. It allows remote management of other hosts running libvirt, and live migration of virtual machines between nodes, creating the “cloud” infrastructure we are trying to build. Moreover, libvirt has a lot of language bindings, allowing you to create your own custom interface. Slide 12 But what use would a virtualization front-end be without a really good hypervisor? That is why, since the acquisition of Qumranet, the original developer of KVM, Red Hat has kept up the maintenance and development of KVM, the Kernel Virtual Machine. What is unique about KVM is that it has been supported in all Linux kernels since 2.6.20. In other words, your favourite distro should support it out of the box. Another great feature of KVM is that it is a very lightweight hypervisor, while at the same time it supports all the latest hardware virtualization extensions on both AMD and Intel CPUs, including Intel's Extended Page Tables and AMD's RVI equivalent.
KVM also allows direct access to PCI and PCI-Express hardware, which might not be an obvious advantage to everyone, so let me give you three scenarios in which this is very beneficial. First, it works rather well for storage controllers – Fibre Channel, RAIDs, etc. Second, if by any chance you are running an HSM (hardware security module) device, you can assign it directly to a virtual machine. And third, if you would like to virtualize your GPGPU servers, you can assign Fermi, Tesla or FireGL hardware directly to a virtual instance. Another great advantage of KVM is the extreme flexibility of the hypervisor. It allows running both 32-bit and 64-bit x86 OSes without any modification to them; you install them just as you do on bare-metal hardware. You can run Linux, FreeBSD, OpenBSD, NetBSD and Solaris, and even lesser-known OSes like QNX and the i386 version of Android run out of the box. Slide 13 KVM supports a driver pack of sorts called virtio. It provides paravirtualized drivers for both storage and network devices. The storage driver is especially important if you plan to do a very complex setup with multipathing on top of SANs or InfiniBand – and as we are building private clouds, chances are that you will. As for the paravirtualized network drivers, they enable running at network speeds very near those of the host, which itself should theoretically run at the speed of the hardware.
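To make this concrete, here is a minimal sketch of a libvirt domain definition for a KVM guest using virtio for both disk and network. The name, image path and bridge name are hypothetical placeholders; a real definition would also carry defaults that libvirt fills in for you:

```xml
<!-- Minimal illustrative KVM guest with virtio disk and NIC.
     Names and paths are examples only; adjust to your setup. -->
<domain type='kvm'>
  <name>build-server-01</name>
  <memory>2097152</memory>   <!-- 2 GiB, in KiB -->
  <vcpu>2</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
  <devices>
    <!-- paravirtualized block device: shows up as /dev/vda in the guest -->
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/build-server-01.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <!-- paravirtualized NIC attached to a host bridge -->
    <interface type='bridge'>
      <source bridge='br0'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
```

Saved to a file, such a definition is registered and started with `virsh define` and `virsh start`, and the same XML works unchanged on any libvirt host you migrate the guest to.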
But all those things, though very notable, are not as important as the feature that makes KVM really shine: it lets you use system resources very productively. You can overcommit both memory and CPUs. With a very clever technique called KSM (Kernel Samepage Merging), enabled in vanilla kernels since 2.6.38, you can achieve mind-blowing results. KSM is also enabled in Red Hat Enterprise Linux 6.0 and above, even though it runs kernel 2.6.32, so it might be available on your server too. In one case, in which we built a small cloud for a client, we migrated the infrastructure to libvirt with KVM running on top of EMC and HP SANs, and we were able to achieve up to 250% higher density. In other words, we completely dropped 10 PowerEdge 1955 blade servers, which cut power consumption by somewhere between 8 and 10 kW. By ramping up the density we managed to cut the power consumption of one of the c7000 blade enclosures by 37%. In general, a combination of libvirt, KVM and recent, well-optimized hardware can cut electricity bills by at least 25%. Slide 14 Yes and no! Yes, because you really can do far better with more efficient hardware; no, because you would create a lot of hardware waste – for a lot of server functions, those old servers are still good enough. Slide 15 That is why, for those old machines, we suggest the use of OS-level virtualization in the form of LXC containers. Like KVM, LXC works on top of a vanilla kernel, as it uses control groups. There is a libvirt driver for LXC, which gives you the ability to manage all your virtualization techniques from a single common place. But how exactly does it work? I suppose most of you have heard of chroot jails. Well, LXC is something like that, but on steroids. It gives you the ability to run multiple isolated Linux instances inside a Linux host system. Every isolated Linux instance uses the host's kernel.
Sometimes you get weird situations where you see Debian or Ubuntu running on top of a Fedora kernel, but in 99% of the cases that is quite alright. As there is no hypervisor whatsoever, there is almost no overhead, which means faster-running instances and services. Unfortunately, it also means a single instance can starve the entire server of resources – especially system memory, as CPU resources are well managed. What LXC doesn't require, unlike KVM, is a specific hardware instruction set to run. This makes it especially attractive for some embedded and fringe server architectures like ARM and MIPS. Slide 16 There are some shortcomings. Even inside libvirt, LXC doesn't support live migration of containers between hardware hosts. At least not yet. Work is being done in that direction; however, even if it is achieved, don't expect it to work like live migration with KVM, where it is fully transparent. In LXC, what would be achieved is a kind of snapshot of the OS state until it gets migrated to another hardware host, and that could lead to some nasty locking of processes. Another thing LXC can't do is run anything other than Linux. You can run 32-bit and 64-bit x86 Linux OSes on top of a 64-bit Linux kernel, but that is about it.
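For comparison with the KVM case, here is a minimal sketch of what an LXC container looks like when defined through the libvirt LXC driver. The container name, root directory and network are hypothetical placeholders:

```xml
<!-- Minimal illustrative libvirt LXC container definition.
     Names and paths are examples only; adjust to your setup. -->
<domain type='lxc'>
  <name>legacy-app-ct</name>
  <memory>524288</memory>   <!-- 512 MiB, in KiB -->
  <vcpu>1</vcpu>
  <os>
    <!-- 'exe' means "run a process", not "boot a kernel":
         the container shares the host kernel -->
    <type>exe</type>
    <init>/sbin/init</init>
  </os>
  <devices>
    <!-- the container's root filesystem is just a host directory -->
    <filesystem type='mount'>
      <source dir='/srv/containers/legacy-app'/>
      <target dir='/'/>
    </filesystem>
    <interface type='network'>
      <source network='default'/>
    </interface>
    <console type='pty'/>
  </devices>
</domain>
```

You manage such containers with the same `virsh` tooling as KVM guests, just over the `lxc:///` connection URI, so LXC containers and full virtual machines end up under one common management layer.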
LXC is solely a community effort; there is no apparent industry support, with the exception of a couple of guys from Canonical who work on it. Despite the fact that it ships with most of the major distros, you will not see the backing that KVM gets. Slide 17 What's next for the business? Virtualizing servers and creating cloud infrastructure is great, but most companies use other machines besides servers. They tend to have people working there who need computers as well, and that means more bills and more hardware. The next big thing in the cloud will be virtualized desktops. Coincidentally, open source is on top of that as well. Along with KVM, Qumranet developed a protocol called SPICE. SPICE is a great technology! It fills the same role as VNC, but works in a completely different manner. It allows for full 2D acceleration of graphics, while giving you the ability to use remote terminals to do your work on those desktops. But how could “cloud” desktops help? Let's say every two to three years you have to upgrade all your hardware desktops. Not anymore. What you can do is run a lot of virtualized desktop instances on top of your private cloud. If you need more resources, just allocate more on the server side. That's it. You also get the added bonus of all the information of developers, accountants and managers living inside the cloud. No more hard disk failures – you would really need a natural disaster to lose your information. Slide 18 Iliyan: Here is a quote for you: “Carbon reduction is one driver, but not the primary driver. The primary driver is time to market. Developers used to take 45 days to get new servers, but in our virtualized private cloud environment, it takes just a couple of minutes.” It's not something unexpected, especially from
someone at a banking institution, but the guy from Citigroup is right. I could tell you stories about how much cloud computing has helped us in our everyday work, but it is better to let Venelin share his experience, as he is the one doing the development. Venelin: Hi. We develop all kinds of software in our start-up. We are mainly focused on mobile programming for iOS, Android and, lately, Qt, but from time to time we also develop web applications. Either way, as we are always short-handed, it sometimes took Iliyan a lot of time to set up development and testing environments. Until one day he proposed that we create “templates” of virtual instances, which I can start and stop at will and use for testing. As I actually prefer Ubuntu, we created a container template where I have almost everything set up beforehand. If I need something, I just enable it inside the LXC container. If another developer needs a Linux instance where he can deploy and test, he just starts one up. No more waiting 2-3 days for a setup. I can say this improved our efficiency a lot. And if by any chance we are short of resources on one server, we just bring down our container and move it to another, less utilized machine. I no longer need to fight for days with my Mac OS X because my PHP doesn't support PostgreSQL out of the box – I now have it working in a container and can do what I want with it. Another very useful setup we have is on our newer servers, where we test mobile apps on top
of virtualized OSes. We test our Qt apps on MeeGo instances inside virtual machines. We also run x86 Android virtual machines. They do not provide the full hardware capabilities of real devices, but they are infinitely helpful for rough testing and benchmarking. As with full OSes, we can move them between servers when we have too many virtual machines running and eating up our resources. We test and deploy fast, and thanks to the cloud we have far less dependence on handsets.
Slide 19: Iliyan: Thanks. What's next for the open-source cloud is crossing over to the public side: either offer your private cloud to the public and monetize your efforts, or create a hybrid cloud of private and public instances. There are a few projects that allow you to do such things. One of them is OpenStack. It is completely open source and is a joint effort by NASA and Rackspace. There are about 130 companies on board, among them Canonical, Dell, HP, Intel, SuSE, AMD and Cisco. What OpenStack does is offer you the ability to turn your private cloud into a public one. You can start offering Infrastructure as a Service and play with the big boys – Amazon, Rackspace, HP, Oracle. The other piece of software is OpenNebula. This one is the more interesting of the two, at least in our view. OpenNebula is also completely open source, just like OpenStack. It doesn't have the “industry backing” that OpenStack does, but what it has is a lot of prominent users in all types of industries; just a few of them are CERN, China Mobile, Telefonica, KPMG and SARA. OpenNebula turns your private cloud into an IaaS that you can use inside the corporation and, at the same time, offer outside your company. OpenNebula also allows seamless integration with Amazon EC2. In other words, you can create a hybrid IaaS that runs both on your private cloud and on public clouds. OpenNebula has a libvirt driver that creates a layer sitting between the hypervisors, the EC2 cloud and libvirt. This way you can always run some less important instances in the public cloud, while still maintaining independence with your private cloud. Slide 22 Q&A