Thursday, 15 September 2011 14:55

Intel Lays Out Vision of 'Cloud 2015'

The big cloud presentation at Tuesday’s IDF was a post-lunch session called, “Driving towards cloud 2015: a technology vision to meet the demands of cloud computing tomorrow” by Intel Senior Fellow Stephen S. Pawlowski. Pawlowski’s job at Intel is “pathfinding”—he’s one of the guys who looks at long-term trends and tries to help steer the company in the right direction. As a big-picture guy, Pawlowski’s presentation was necessarily pretty general, since he had a ton of ground to cover in laying out the trends and issues that Intel sees as shaping cloud computing between now and 2015.

(An aside: what is it with 2015? This is the third time that year has come up on this blog, and the second time on the first day of IDF. Even one of the Intel presenters remarked on it. Everyone is trying to look just a little over three years out, apparently.)

Like the Intel IT presenter mentioned below, Pawlowski opened up with a survey of attempts to define “cloud.” He then got into the meat of his presentation, which covered security and privacy, power efficiency, big data, the sensor explosion, heterogeneous processing, and more.

Security and privacy

Pawlowski started out by hammering a point that he made over and over again in his presentation: security and privacy are the two biggest obstacles to cloud adoption. And it was clear that he wasn’t just talking about obstacles to enterprise cloud adoption; he meant among consumers, as well. “Do you know where your data is? Who can see it? Who can modify it without a trace? Who can aggregate, summarize, and embed it for purposes other than yours?” one of his slides asked.

“Security is my number one concern and fear,” Pawlowski told the audience.

In connection with this, he also talked a bit about the threat posed by elite, professional hackers, specifically Chinese hackers. Ok ok… he never actually mentioned China, but from his allusion to the Aurora attacks and the other things he said, it was pretty clear that he was speaking code for “Chinese hackers.”

It would be prohibitively costly to secure every last corner of the computing ecosystem from the sophisticated threat posed by determined, state-sponsored hacker groups, so Pawlowski says that Intel takes a segmented approach to security threats. The chipmaker categorizes threats as low risk (e.g. some violations of privacy), medium risk (e.g. stolen consumer credit cards), and high risk (e.g. state secrets), and then works to apply appropriate levels of security to each context. I have a feeling that we’re going to be hearing a lot more about this segmented approach as Intel starts to unveil more of what it has been working on with McAfee.
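To make the segmentation idea concrete, here is a minimal sketch; the tiers are the ones from his examples, but the control names and mapping are my own illustration, not Intel's actual taxonomy:

# Hypothetical sketch of the "segmented" idea: classify a context's threat
# level and pick a proportionate set of controls. The control names and the
# mapping below are illustrative, not Intel's taxonomy.
from enum import Enum

class RiskTier(Enum):
    LOW = "low"        # e.g. some violations of privacy
    MEDIUM = "medium"  # e.g. stolen consumer credit cards
    HIGH = "high"      # e.g. state secrets

CONTROLS = {
    RiskTier.LOW: ["transport encryption"],
    RiskTier.MEDIUM: ["transport encryption", "encryption at rest", "audit logging"],
    RiskTier.HIGH: ["transport encryption", "encryption at rest", "audit logging",
                    "hardware-rooted attestation", "strict access review"],
}

def controls_for(tier: RiskTier) -> list[str]:
    """Return the (illustrative) control set proportionate to the risk tier."""
    return CONTROLS[tier]

print(controls_for(RiskTier.MEDIUM))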

Power efficiency

Apart from security and privacy, the other overarching theme of Pawlowski’s presentation was power efficiency. Unlike the first two issues, power efficiency represents a hard, physical constraint on datacenter growth. Luckily, Intel is uniquely situated to address datacenter power efficiency.

Pawlowski described Intel’s power efficiency focus as extending all the way from “grid to gate,” i.e., from the power grid down to the individual transistor. Because it has its fingers in every layer of the datacenter, Intel can optimize power within and across layers. This ability to optimize across layers is going to be very hard for competitors to match in the long run, and I think it’s possible (not certain, but possible) that it’s going to give Intel a key edge in the datacenter race that will see the company through the next decade or two.
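As a back-of-the-envelope illustration of why the grid-to-gate framing matters, here is a toy calculation with made-up per-stage efficiencies showing how losses compound across the layers:

# Back-of-the-envelope illustration with made-up numbers: per-stage losses
# compound, so only a fraction of grid power ends up doing useful work at
# the transistors, and every layer is a lever worth optimizing.
stages = {
    "grid -> facility (distribution, UPS)": 0.95,
    "facility -> server power supply": 0.92,
    "power supply -> board voltage regulators": 0.90,
    "voltage regulators -> switching at the gates": 0.85,
}
overall = 1.0
for stage, efficiency in stages.items():
    overall *= efficiency
    print(f"{stage}: {efficiency:.0%} efficient (cumulative {overall:.0%})")
print(f"In this toy model only ~{overall:.0%} of grid power reaches the gates.")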

Growth trends and Big Data

Part of Pawlowski’s presentation focused on growth trends, and he cited an EU report claiming that there will be 2.5 billion Internet users by 2015 (cf. IDC’s claim of 2.7 billion people online by then) and 10 billion connected devices. He also sees a surge in storage requirements. (He had some stats for video uploaded to YouTube, which have become a standard part of any corporate presentation on growing bandwidth and storage needs.)

Pawlowski also mentioned a surge in sensors (GPS, accelerometers, gyroscopes, compasses) and said that, “Sensors worry me from a security POV, because if they have compute, communication, and storage they can be attacked… they can be a point of vulnerability.”

All of this, of course, means more real-time data to be analyzed, so Big Data made up another part of Pawlowski’s vision for cloud 2015. The future, according to Pawlowski (and no doubt he’s right), will feature a ton of real-time analytics drawn from the great mass of sensors out there. Analysts will throw algorithms from the machine learning toolbox at a torrent of live data that is diverse, unstructured (no schema), uncurated, and has inconsistent syntax and semantics. Enabling this kind of processing is a major part of Intel’s cloud vision.
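As a rough sketch of what that kind of processing looks like in miniature (my own example, not anything Intel showed), here is a single-pass anomaly detector scoring a simulated sensor stream as readings arrive:

# A minimal sketch of stream analytics: score a live feed of sensor readings
# for anomalies using an online (single-pass) mean/variance estimate.
# The threshold and the simulated feed are made up for illustration.
import math
import random

class OnlineAnomalyDetector:
    def __init__(self, threshold: float = 3.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0          # sum of squared deviations (Welford's method)
        self.threshold = threshold

    def update(self, x: float) -> bool:
        """Fold one reading into the running stats; return True if anomalous."""
        if self.n >= 2:
            std = math.sqrt(self.m2 / (self.n - 1))
            anomalous = std > 0 and abs(x - self.mean) / std > self.threshold
        else:
            anomalous = False
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return anomalous

detector = OnlineAnomalyDetector()
for _ in range(1000):
    reading = random.gauss(20.0, 1.0)       # simulated sensor feed
    if detector.update(reading):
        print(f"anomalous reading: {reading:.2f}")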

Heterogeneous processing

Another part of Intel’s cloud vision is heterogeneous processing—different kinds of cores doing different kinds of jobs. Pawlowski pointed out that there’s no “the” workload in the cloud; rather, there’s a variety of cloud workloads, and as such there will be a variety of types of hardware that run those workloads. There’s a place for small cores like Atom, a place for beefy cores like Sandy Bridge, and even a place for the GPU… at least, Intel hopes there’s a place for the GPU in the cloud.

I touched on this issue briefly in my Ars writeup of Intel’s latest Research Day. The company is deeply involved in finding ways to use GPU hardware in the cloud, for the simple reason that it would like to ship GPU cores as part of its Xeon line rather than making a GPU-less version of Xeon for the cloud. In other words, if you’re going to be in the business of making mass-market chips with a GPU on-die, then you get better economies of scale if you can sell those exact same chips into the datacenter. Right now, Intel makes Xeons without the GPU because there’s not much for the GPU to do in a server, but if you can get the GPU accelerating crypto and doing other useful work, then you can sell the same GPU-enabled parts into the datacenter that you sell on the desktop.

So heterogeneity is good for Intel, because it lets them exploit in the server market the economies of scale that they get from the consumer market with products like Atom and the GPU-enabled Core line. Fortunately, I think they’re right about heterogeneity in the cloud—some workloads will want an Atom (or an ARM part), and some will want a Xeon.
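To illustrate the matching that heterogeneity implies, here is a toy dispatcher; the job hints, thresholds, and mapping are my own simplification, not anything Intel described:

# An illustrative sketch of the dispatch problem heterogeneity implies:
# route each job to the core type that suits it. The categories, hints,
# and mapping are a simplification for illustration, not an Intel scheduler.
from enum import Enum

class CoreType(Enum):
    SMALL = "Atom-class core"        # many light, independent requests
    BIG = "Xeon-class core"          # heavy single-threaded work
    GPU = "on-die GPU"               # data-parallel work (e.g. bulk crypto)

def place(job: dict) -> CoreType:
    """Pick a core type from coarse job hints (purely illustrative)."""
    if job.get("data_parallel"):
        return CoreType.GPU
    if job.get("cpu_seconds", 0) < 0.01:
        return CoreType.SMALL
    return CoreType.BIG

jobs = [
    {"name": "static page serve", "cpu_seconds": 0.001},
    {"name": "analytics query", "cpu_seconds": 4.0},
    {"name": "bulk TLS handshakes", "data_parallel": True},
]
for job in jobs:
    print(job["name"], "->", place(job).value)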

The client-aware cloud: fat clients, lock-in, and an edge vs. ARM

Speaking of things that Intel wants to see, the other big place where Intel talks its book is the client-aware cloud. The idea behind client-aware cloud is straightforward: clients will tell the cloud something about their capabilities (e.g. “I support this special kind of authentication”), and the cloud will respond appropriately for that particular client (e.g. “I see that you support this special kind of authentication, so instead of serving you the standard login that I serve to most clients, I’ll accept your special credentials and pass you through.”)
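A toy sketch of that negotiation might look like this; the header name and capability strings are invented for illustration, not part of any real protocol:

# Toy sketch of "client-aware" negotiation: the client declares a capability
# and the service picks a response path accordingly. The header name and the
# capability strings below are invented for illustration.
def handle_login(request_headers: dict) -> str:
    capabilities = request_headers.get("X-Client-Capabilities", "")
    declared = {c.strip() for c in capabilities.split(",") if c.strip()}

    if "hw-attested-auth" in declared:
        # Client claims hardware-backed credentials: accept them directly.
        return "challenge: present hardware-attested credential"
    # Fallback: the standard login flow served to most clients.
    return "serve standard username/password login form"

print(handle_login({"X-Client-Capabilities": "hw-attested-auth, gpu-decode"}))
print(handle_login({}))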

Intel likes client-aware for two reasons. First, Intel wants to keep selling “fat” clients, because “fat client” equals “lots of transistors and performance.” About three years ago, Intel started talking trash about thin client in the context of the start of the ARM wars. From an Ars piece I did at the time:

Let’s say that you’re Intel, and you spent $5.5 billion in capital expenditures in 2007, much of it on the 45nm transition, and all of it for the purpose of beating rivals at delivering performance-per-watt increases across a range of market segments that spans the computing spectrum from servers to ultraportable devices. What, then, are you supposed to think about Web 2.0, the resurgence of the thin client model, and the popular “cloud computing” notion that users should be able to do almost all of their work and play with nothing but a simple Web browser (maybe running on an ARM-powered web tablet)? Judging by the comments of some of the Intel folks in this past week’s Directions Symposium, the chipmaker thinks it stinks.

In the intervening years, Intel has toned down the anti-thin-client rhetoric just a bit, but the company is still looking to sell fully-featured clients—or “rich clients” as the chipmaker prefers to call them—of the kind that rival ARM can’t yet match.

Apart from the fact that Intel wants to see clients maintain a robust appetite for the transistors and features it supplies, there is another reason why Intel likes the client-aware cloud vision: lock-in.

Intel would love it if the most convenient and secure way for you to connect to an Intel-powered cloud is with an Intel-powered client. If Intel can build clouds that are vPro-aware and that work best with vPro-enabled clients, then you’ll have a real incentive to make sure that Intel—not ARM—is inside your smartphone, tablet, or laptop.

So, that’s the broad outline of Intel’s vision for the cloud in three years. What’s your take?
