The Evolution of Cloud Computing: How to plan for change
Ebook · 417 pages · 9 hours

About this ebook

With the ability to upload files securely and access them from anywhere, cloud computing has been positioned as today's ideal IT platform. However, similar claims have been made for many previous IT architectural approaches, so how is cloud different? Is what is being offered now an end point, or just the beginning of an evolution in how cloud is instantiated and used?

This book looks at how we got to where we are, what cloud is promising now, and how cloud is likely to evolve and change as the future unfolds. Readers will be better able to ensure that decisions made now will stand them in good stead for the future, and they will gain a better understanding of how to use cloud to deliver the best outcome for their organisations.
Language: English
Release date: Dec 19, 2017
ISBN: 9781780173603


    Book preview

    The Evolution of Cloud Computing - Clive Longbottom

    PART 1

    LOOKING BACK

    Cloud computing in context

    1. BACKGROUND

    On two occasions I have been asked, ‘Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?’ In one case a member of the Upper, and in the other a member of the Lower, House put this question. I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.

    Charles Babbage (‘the Father of Computing’) in Passages from the Life of a Philosopher, 1864

    It is interesting to see how far we have come in such a short time. Before we discuss where we are now, it can be instructive to see the weird but wonderful path that has been taken to get us to our current position. The history of electronic computing is not that long: indeed, much of it has occurred over just three or four human generations. By all means, miss out this chapter and move directly to where cloud computing really starts, in Chapter 2. However, reading this chapter will help to place into perspective how we have got here – and why that is important.

    LOOKING BACKWARD TO LOOK FORWARD

    That men do not learn very much from the lessons of history is the most important of all the lessons that history has to teach.

    Aldous Huxley in Collected Essays, 1958

    Excluding specialised wartime systems such as the electromechanical German Zuse Z3, the British Enigma code-breaking Bombes and the electronic but special-purpose Colossus, the first fully electronic general-purpose computer is generally considered to be the US's Electronic Numerical Integrator And Computer (ENIAC). First operated in 1946, by the time it was retired in 1955 it had grown to use 17,500 vacuum tubes, 7,200 crystal diodes, 1,500 relays, 70,000 resistors, 10,000 capacitors and around 5,000,000 hand-soldered joints, all in a space measuring 168 m². Compare this with Intel's 2016 Broadwell-EP Xeon chip, which contains 7.2 billion transistors in a die of 456 mm².

    Weighing in at around 27 tonnes and needing 150 kW of electricity, ENIAC could compute a military projectile's trajectory around 2,400 times faster than a human. Its longest continuous operating period without breaking down was less than five days.¹ It has to be noted, though, that a 1973 legal case found that one of ENIAC's designers had seen the earlier Atanasoff–Berry Computer, and that ENIAC shared certain design and functional approaches with it.

    Meanwhile, in Manchester, UK, the first stored-program computer ran its first program in 1948. This machine, the Small-Scale Experimental Machine (SSEM), was developed at the Victoria University of Manchester and led to the development of the first commercially available stored-program computer, the Ferranti Mark 1, launched in 1951.

    In the 70 years since ENIAC, the use of computers has exploded. The development and wide availability of transistors drove the digital computer market strongly in the 1950s, leading to IBM's 700 and 7000 series machines. These were soon replaced by its first ‘true’ mainframe computer, the System/360. This in turn created a group of mainframe competitors that, as the market stabilised, became known as ‘IBM and the BUNCH’ (IBM along with Burroughs, UNIVAC, NCR, Control Data Corporation and Honeywell). A major plus-point for mainframes was that everything was in one place – and through the use of virtualisation, launched in 1966 on the IBM System/360-67, multiple workloads could be run on the same platform, keeping resource utilisation at 80% or higher.

    However, mainframes were not suitable for all workloads, or for all budgets, and a new set of competitors began to grow up around smaller, cheaper systems that were within the reach of smaller organisations. These minicomputer vendors included companies such as DEC (Digital Equipment Corporation), Texas Instruments, Hewlett Packard (HP) and Data General, along with many others. These systems were good for single workloads: each could be tuned individually to carry a single workload or, in many cases, several similar workloads. Utilisation levels were still reasonable but tended to be around half, or less, of those of mainframes.

    The battle was on. New mass-produced integrated circuit architectures, in which huge numbers of transistors are embedded into a single central processing unit (CPU), were built around CISC and RISC (complex and reduced instruction set computing) designs. Each vendor's systems used a different operating system, and compatibility between systems was completely disregarded.

    Up until this point, computers were generally accessed through either completely dumb or semi-dumb terminals. These were screen-based, textually focused devices, such as IBM's 3270 and DEC's VT100/200, and were the prime means of interfacing with the programs and data that remained permanently tied to the mainframe or minicomputer. Although prices were falling, these machines were still not within the reach of the mass of small and medium enterprises around the globe.

    THE PRICE WAR

    Technological innovation has dramatically lowered the cost of computing, making it possible for large numbers of consumers to own powerful new technologies at reasonably low prices.

    James Surowiecki (author of The Wisdom of Crowds) in The New Yorker, 2012

    The vendors continued trying to drive computing down to a price point where they could penetrate even more of the market. It was apparent that hobbyists and techno-geeks were already embracing computing in the home. After expensive and complex build-your-own kits such as the Altair 8800 and Apple I, Commodore launched its PET (Personal Electronic Transactor) in mid-1977 but suffered from production issues, which allowed Apple to offer a pre-built computer for home use, the Apple II. This had colour graphics and expansion slots, but cost was an issue at £765/$1,300 (over £3,300/$5,000 now). However, costs were driven down: the Radio Shack TRS-80, which came through a couple of months after the Apple II, managed to provide a complete system for under £350/$600. Then Clive Sinclair launched the Sinclair ZX80 in 1980 at a cost of £99.95/$230, ready built. Although the machine was low-powered, it drove the emergence of a raft of low-cost home computers, including the highly popular BBC Micro, which launched in 1981, the same year as the IBM Personal Computer, or PC.

    Suddenly, computing power was outside the complete control of large organisations, and individuals had a means of writing, using and passing on programs. Although Olivetti had brought out a stand-alone desktop computer, the Programma 101, as early as 1965, it was not a big commercial success, and other attempts also failed due to the lack of standardisation across machines. The fragility of the hardware and poor operating systems led to a lack of customers, who at this stage still did not fully understand the promise of computing for the masses. Companies had also attempted to bring out desktop machines, such as IBM's SCAMP and Xerox's Alto, the latter of which introduced the concept of the graphical user interface using windows, icons, menus and a mouse-driven screen pointer (which became known as the WIMP system, now commonly adopted by all major desktop operating systems). But heterogeneity was still holding everybody back: the lack of a standard to which developers could write applications meant that there was little opportunity to build and sell enough copies of any software to recoup the investment in development and associated costs. Unlike on the mainframe, where software licence costs could run into millions of dollars, personal computer software had to be priced in the tens or hundreds of dollars, with a few programs possibly going into the thousands.

    THE RISE OF THE PC

    Computers in the future may … weigh only 1.5 tons.

    Popular Mechanics magazine, 1949

    It all changed with the IBM PC. After a set of serendipitous events, Microsoft's founder, Bill Gates, found himself with an opportunity. IBM had wanted to go with the existing CP/M (Control Program/Monitor, or latterly Control Program for Microcomputers) operating system for its new range of personal computers but had come up against various problems in gaining a licence to use it. Gates had been a key part of trying to broker a deal between IBM and CP/M's owner, Digital Research, and he did not want IBM to go elsewhere. At this time, Microsoft was a vendor of programming language software, including BASIC, COBOL, FORTRAN and Pascal. Gates therefore needed a platform on which these could easily run, and CP/M was his operating system of choice. Seeing that the problems with Digital Research were threatening the deal between IBM and Microsoft, Gates licensed a fledgling operating system written by Tim Paterson at Seattle Computer Products – known first as QDOS (the ‘quick and dirty operating system’) and later as 86-DOS – and took it to IBM. As part of this, Gates also got Paterson to work for Microsoft; Paterson would become the prime mover behind the operating system that became widespread across personal computers.

    So was born MS-DOS (used originally by IBM as PC-DOS), and the age of the standardised personal computer (PC) came about. Once PC vendors started to settle on standardised hardware, such that any software needing to make a call to the hardware could do so across a range of different manufacturers' systems, software development took off in a major way. Hardware companies such as Compaq, Dell, Eagle and Osborne brought out ‘IBM-compatible’ systems, and existing companies such as HP and Olivetti followed suit.

    The impact of the PC was rapid. With software made available to emulate the dumb terminals, users could both run programs natively on a PC and access programs running on mainframes and minicomputers. This seemed like nirvana, until organisations began to realise that data was now being spread across multiple storage systems: some directly attached to mainframes, some loosely attached to minicomputers and some inaccessible to the central IT function, as the data was tied to the individual's PC.
