
Tag: servers

My Takeaways from GTC 2012

If you’ve ever taken a quick look at the Bench Press blog that I post to, you’ll notice quite a few posts about the promise of using graphics chips (GPUs) – the kind NVIDIA and AMD make for gamers – for scientific research and high-performance computing. Well, last Wednesday, I had a chance to enter the Mecca of GPU computing: the GPU Technology Conference.


If it sounds super geeky, it’s because it is :-). But, in all seriousness, it was a great opportunity to see what researchers and interesting companies were doing with the huge amount of computational power embedded inside GPUs, as well as to see some of NVIDIA’s latest and greatest technology demos.

So, without further ado, here are some of my reactions after attending:

  • NVIDIA really should just rename this conference the “NVIDIA Technology Conference”. NVIDIA CEO Jen-Hsun Huang gave the keynote, the conference itself is organized and sponsored by NVIDIA employees, NVIDIA has a strong lead in the ecosystem in terms of applying the GPU to things other than graphics, and most of the non-computing demos were NVIDIA technologies leveraged elsewhere. I understand that they want to brand this as a broader ecosystem play, but let’s be real: this is like Intel calling their “Intel Developer Forum” the “CPU Technology Forum” – let’s call it what it is, ok? 🙂
  • Lots of cool uses for the technology, but we definitely haven’t reached the point where the technology is truly “mainstream.” On the one hand, I was blown away by the abundance of researchers and companies showcasing interesting applications for GPU technology. The poster area was full of interesting uses of the GPU in life science, social sciences, mathematical theory/computer science, financial analysis, geological science, astrophysics, etc. The exhibit hall was full of companies pitching hardware design and software consulting services and organizations showing off sophisticated calculations and visualizations that they weren’t able to do before. These are great wins for NVIDIA – they have found an additional driver of demand for their products beyond high-end gaming. But this makeup of attendees should also be somewhat alarming to NVIDIA – it means that the applications for the technology so far are fundamentally niche-y, not mainstream. This isn’t to say they aren’t valuable (clearly many financial firms are willing to pay almost anything for a little bit more quantitative power to do better trades), but the real explosive potential, in my mind, is the promise of having “supercomputers inside every graphics chip” – that’s a deep democratization of computing power that is not realized if the main users are only at the highest end of financial services and research, and I think NVIDIA needs to help the ecosystem find ways to get there if they want to turn their leadership position in alternative uses of the GPU into a meaningful and differentiated business driver. (If you’re curious what this kind of general-purpose GPU code actually looks like, see the short sketch after this list.)
  • NVIDIA made a big, risky bet on enabling virtualization technology. In his keynote, NVIDIA CEO Jen-Hsun Huang announced with great fanfare (as is usually his style) that NVIDIA has made the GPU virtualizable – making it possible for multiple users to share the same graphics card over the internet. Why is this potentially a big risk? Because it means that if you want good graphics performance, you no longer have to buy an expensive graphics card for your computer – you can simply plug into a graphics card that’s hosted somewhere else on the internet, whether for gaming (using a service like GaiKai or OnLive), for virtual desktops (where all of the hard work is done by a server and you’re just seeing the screen image, much like you would watch a video on Netflix or YouTube), or for remote rendering services (if you work in digital movie editing). So why do it? I think NVIDIA likely sees a large opportunity in selling graphics chips, which have to date been mostly a PC thing, into servers that are now being built and teed up to do online gaming, online rendering, and virtual desktops. I think this is also motivated by the fact that the most mainstream and novel uses of GPU technology have been about putting GPU power into “the cloud” (hosted somewhere on the internet): GaiKai wants to use this for gaming, Elemental wants to use this to help deliver videos to internet viewers, and rendering farms want to use this so that movie studios don’t need to buy high-end workstations for all their editing/special effects guys.
  • NVIDIA wants to be more than graphics-only. At the conference, three things jumped out at me as not quite congruent with the rest of the show. The first was that there were quite a few booths showing off people using Android tablets powered by NVIDIA’s Tegra chips to play high-end games. Second, NVIDIA proudly showed off one of the new Tesla cars, with its graphical, touchscreen-driven user interface inside (also powered by NVIDIA’s Tegra chips).
    Third, this was kind of hidden away in a random booth, but a company called SECO that builds development boards showed off a nifty board combining NVIDIA’s Tegra chips with its high-end graphics cards to build something they called the CARMA Kit – a low-power, high-performance computing beast.
    While NVIDIA has talked before about its plans with “Project Denver” to build a chip that can displace Intel’s hold on computer CPUs, these moves show the company is trying to turn that vision into reality: instead of just being the graphics card inside a game console, NVIDIA is making tablets that can play games, making the processor that runs the operating system for a car, and finding ways to take its less powerful Tegra processor and pair it up with a little GPU-supercomputer action.
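
For readers wondering what “applying the GPU to things other than graphics” actually looks like at the code level, here is a minimal, illustrative sketch – not anything demoed at GTC, just the canonical SAXPY (y = a*x + y) example written in CUDA, assuming you have NVIDIA’s CUDA toolkit and a CUDA-capable card. The point is simply that the same chip that shades pixels will happily run a small C-like function over a million array elements in parallel:

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// SAXPY (y = a*x + y): each GPU thread computes one element, so a million
// elements get spread across thousands of threads running in parallel.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;                  // ~1 million elements
    const size_t bytes = n * sizeof(float);

    // Set up the input data on the CPU ("host") side
    float *hx = (float *)malloc(bytes);
    float *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Copy the data to the GPU ("device"), run the kernel, copy the result back
    float *dx, *dy;
    cudaMalloc((void **)&dx, bytes);
    cudaMalloc((void **)&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;   // 4096 blocks of 256 threads
    saxpy<<<blocks, threads>>>(n, 2.0f, dx, dy);
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %f (expected 4.0)\n", hy[0]);      // 2.0*1.0 + 2.0 = 4.0

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}
```

Compiled with nvcc (e.g. nvcc saxpy.cu -o saxpy), this launches roughly 4,096 blocks of 256 threads each – the “supercomputer inside every graphics chip” idea in its simplest possible form.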

If it’s not apparent, I had a blast and look forward to seeing more from the ecosystem!


Disruptive ARMada

I’ve mentioned before that one of the greatest things about being in the technology space is how quickly the lines of competition change.

Take ARM, the upstart British chip company that licenses the chip technology powering virtually all mobile phones today. Although its chips have traditionally been relegated to “dumb” devices because of their low cost and low power consumption, they’ve been riding a wave of disruptive innovation to move beyond just low-cost “dumb” featurephones into more expensive smartphones and, potentially, into new low-power/always-connected netbooks.

More interesting, though, is the recent revelation that ARM chips are being used not just in low-power, consumer-oriented devices but also in production-grade servers that can power websites – a domain that has traditionally belonged to more expensive chips from companies like AMD, Intel, and IBM.

And now, with:

  1. A large semiconductor company, Marvell, officially announcing that it will release a high-end ARM chip called the Armada 310 targeted at servers
  2. A new startup called Smooth Stone (it’s a David-vs-Goliath allusion) raising $48M (some of it from ARM itself!) to build ARM chips aimed at data center servers
  3. ARM announcing its Cortex A15 processor, a multicore beast with support for hardware virtualization and physical address extensions — features you would generally only see in a server product
  4. Dell (the leading supplier of servers for this new generation of webscale data centers/customers) revealing that it has built test servers running on ARM chips as a proof of concept and looks forward to the next generation of ARM chips

It makes you wonder if we’re on the verge of another disruption in the high-end computer market. Is ARM about to repeat what Intel/AMD chips did to the bulkier chips from IBM, HP, and Sun/Oracle?

(Image credit)


Linux: Go Custom or Go Home

In a post I wrote a few weeks ago about why I prefer the Google approach to Apple’s, I briefly touched on what I thought was one of the most powerful aspects of Android, and something I don’t think is covered enough when people discuss the iPhone vs Android battle:

With Google[’s open platform strategy], you enable many suppliers (Samsung, HTC, and Motorola for starters in the high-end Android device world, Sony and Logitech in Google TV) to compete with one another and offer their own variations on hardware, software, services, and silicon. This allows companies like Cisco to create a tablet focused on enterprise needs like the Cius using Android, something which the more restrictive nature of Apple’s development platform makes impossible (unless Apple creates its own), or researchers at the MIT Media lab to create an interesting telemedicine optometry solution.

To me, the most compelling reason to favor a Linux/Android approach is this customizability. Too often, I see people in the Linux/Android community focus on the lack of software licensing costs or emphasize a high-end feature or the ability to emulate some Windows/Mac OS/iOS feature.

But, while those things are important, the real power of Android/Linux is to go where Microsoft and Apple cannot. As wealthy as Microsoft and Apple are, even they can’t possibly create solutions for every single device and use case. iOS may work well for a general phone/tablet like the iPhone and iPad, but what about phones targeted for the visually impaired? What about tablets which can do home automation? Windows might work great for a standard office computer, but what about the needs of scientists? Or students? The simple fact of the matter is neither company has the resources to chase down every single use case and, even if they did, many of these use cases are too niche for them to ever justify investment.

Linux/Android, on the other hand? The open source nature allows for customization (which others can then borrow for still other forms of customization) to meet a market’s (or partner’s) needs. The lack of software licensing costs means that the sales needed to justify an investment go down. Take some recent, relatively high-profile examples:

Now, none of these are silver bullets that will drive 100% Linux adoption – but they convey the power of the open platform approach. Which leads me to this potentially provocative conclusion: the real opportunity for Android/Linux (and the real chance to win) is not as a replacement for a generic Windows or Mac OS install, but as a path to highly customized applications.

Now I can already hear the Apple/GNOME contingent disagreeing with me because of the importance of user experience. And, don’t get me wrong, user experience is important and the community does need to work on it (I still marvel that the Android Google Maps application is slower than the iPhone’s, and at my inability to replace Excel/PowerPoint/other apps with OpenOffice/Wine), but I would say the war against the Microsoft/Apple user experience is better fought by focusing on use-case customization rather than by trying to beat a well-funded, centrally managed effort.

Consider:

  1. Would you use iOS as the software for industrial automation? Or to run a web server? No. As beautiful and easy-to-use as the iOS design is, because it’s not built as a real-time operating system or for web server use, it won’t compete along those dimensions.
  2. How does Apple develop products with such high quality? It’s simple: focus on a few things. An Android/Linux setup should not try to be the same thing for all applications (although some of the underlying systems software can be). Instead, different Android/Linux vendors should focus on customizing their distributions for specific use-cases. For example, a phone guy should gut the operating system of anything that’s not needed for a phone and spend time building phone-specific capabilities.

The funny thing is the market has already proven this. Where is Linux currently the strongest? I believe its penetration is highest in three domains: smartphones, servers, and embedded systems. Setting aside smartphones, which could be a special case (though Android’s leadership there is a big win for Linux), the other two applications are not particularly sexy or consumer-facing, but they are very educational examples. In the case of servers, the Linux community’s (geeky) focus on high-end features made it a natural fit. Embedded systems have heavily used Linux because of the ability to customize the platform in the way that the silicon vendor or solution vendor wants.


Of course, high levels of customization can introduce fragmentation. This is a legitimate problem wherever software compatibility is important (think computers and smartphones), and, to some extent, the Android smartphone ecosystem is already facing it as more devices and manufacturer customizations appear (Samsung, HTC, and Motorola put out fairly different devices). But I think this is a risk that can be managed. First, a strong community and support for industry standards can help limit issues with fragmentation. Take the World Wide Web: the same website can work on Mac OS and Windows because HTML is a standard that browsers adhere to – and the strength of the web standards and development community helps reduce unnecessary fragmentation and support developers where such fragmentation exists. Second, the open source nature of Linux/Android projects means that customizations can be more easily shared between development teams and that new projects can draft off of old ones. This doesn’t mean they become carbon copies of one another, but it helps spread good customizations farther, which controls some of the fragmentation problem. Lastly, and this may be a cop-out answer, I believe universal compatibility between Linux-based products is unnecessary. Why does there have to be universal compatibility between a tablet, a server, and a low-end microcontroller? Or, for that matter, between a low-end feature phone and a high-end smartphone? So long as the customizations are purpose-driven, the incompatibilities should not jeopardize the quality of the final product and may, in fact, enhance it.

Given all this, in my mind, the Android/Linux community needs to think of better ways to target customizations. I think it’s the best shot they have at beating out the larger and less nimble companies that make up their competition, and at living up to Linux’s full potential as the widely used open source operating system it can be.

(Comic credit – XKCD) (Image credit)


Keep your enemies closer

One of the most interesting things about technology strategy is that the lines of competition between different businesses are always blurry. Don’t believe me? Ask yourself this: would anyone 10 years ago have predicted that:

I’m betting not too many people saw these coming. Well, a short while ago, the New York Times Tech Blog decided to chart some of this out, highlighting how the boundaries between some of the big tech giants out there (Google, Microsoft, Apple, and Yahoo) are blurring:

[Chart from the New York Times Tech Blog showing the overlapping businesses of Google, Microsoft, Apple, and Yahoo]

It’s an oversimplification of the complexity and the economics of each of these business moves, but it’s still a very useful depiction of how tech companies wage war: they keep their enemies so close that they eventually imitate their business models.

(Chart credit)
