
Tag: Intel

Why VR Could be as Big as the Smartphone Revolution

Technology in the 1990s and early 2000s marched to the beat of an Intel-and-Microsoft-led drum.

[Image: Microsoft and Intel logos, via IT Portal]

Intel would release new chips at a regular cadence: each cheaper, faster, and more energy-efficient than the last. This would let Microsoft push out new, more performance-hungry software, which would, in turn, get customers to want Intel’s next, more awesome chip. Couple that virtuous cycle with the fact that millions of households were buying their first PCs and getting onto the Internet for the first time, and you had enormous opportunities to build businesses and products across software and hardware.

But, over time, that cycle broke down. By the mid-2000s, Intel’s technological progress bumped into the limits of what physics would allow with regard to chip performance and cost. Complacency born of its enviable market share, coupled with software bloat in its Windows and Office franchises, had a similar effect on Microsoft. The result was that the Intel and Microsoft drum stopped beating as the two became unable to give the mass market a compelling reason to upgrade to each subsequent generation of devices.

The result was a hollowing out of the hardware and semiconductor industries tied to the PC market that was only masked by the innovation stemming from the rise of the Internet and the dawn of a new technology cycle in the late 2000s in the form of Apple’s iPhone and its Android competitors: the smartphone.

[Image: Steve Jobs, via Mashable]

A new, but eerily familiar, cycle began: like clockwork, Qualcomm, Samsung, and Apple (playing the part of Intel) would devise new, more awesome chips, which fed the creation of new performance-hungry software from Google and Apple (playing the part of Microsoft), which in turn drove demand for the next generation of hardware. Just as with the PC cycle, new and lucrative software, hardware, and service businesses flourished.


But, just as with the PC cycle, the smartphone cycle is starting to show signs of maturity. Apple’s recent slower-than-expected growth has already been blamed on smartphone market saturation. Users are beginning to see each new generation of smartphone as a marginal improvement. There are also eerie parallels between the growing complaints over Apple software quality, even from Apple fans, and the position Microsoft was in near the end of the PC cycle.

While it’s too early to call the end for Apple and Google, history suggests that smartphones will eventually enter a phase similar to the one the PC industry experienced. This raises the question: what’s next? Many of the traditional answers to this question – connected cars, the “Internet of Things”, wearables, digital TVs – have not yet proven themselves to be truly mass market, nor have they shown the virtuous technology upgrade cycle that characterized the PC and smartphone industries.

This brings us to Virtual Reality. With VR, we have a new technology paradigm that can (potentially) appeal to the mass market (new types of games, new ways of doing work, new ways of experiencing the world, etc.). It also has a high bar for hardware performance that will benefit dramatically from advances in technology, not dissimilar from what we saw with the PC and smartphone.

[Image: Oculus headset, via Forbes]

The ultimate proof will be whether or not a compelling ecosystem of VR software and services emerges to make this technology a mainstream “must-have” (something that, admittedly, the high price of the first-generation Facebook/Oculus, HTC/Valve, and Microsoft products may hinder).

As a tech enthusiast, it’s easy to get excited. Not only is VR just frickin’ cool (it is!), it’s probably the first thing since the smartphone with the mass appeal and virtuous upgrade cycle needed to bring about the kind of flourishing of products and companies that makes tech such a dynamic space to be involved with.


No Digital Skyscrapers

A colleague of mine shared an interesting article by Sarah Lacy from tech site Pando Daily about technology’s power to build the next set of “digital skyscrapers” – Lacy’s term for enduring, 100-year brands in (or made possible by) technology. On the one hand, I wholeheartedly agree with one of the big points Lacy wants the reader to walk away with: that more entrepreneurs need to strive to make a big impact on the world and not settle for quick-and-easy payouts. That is, after all, why venture capitalists exist: to fund transformative ideas.

But I fundamentally disagree with the article’s premise. In fact, the very reason I’m interested in technology – its capacity for transformative ideas – is why I don’t think it’s possible to build “100-year digital skyscrapers”.

In fact, I genuinely hope it’s not possible. Frankly, if I felt it were, I wouldn’t be in technology, and certainly not in venture capital. To me, technology is exciting and disruptive precisely because you can’t build long-standing skyscrapers. Sure, IBM and Intel have been around a while – but what they do as companies, what their brands mean, and their relative positions in the industry have changed radically. I just don’t believe that the products we will care about and the companies we think are shaping the future ten years from now will be the same ones we are talking about today – just as today’s were not the ones we talked about ten years ago, and neither set will be the ones we talk about twenty years from now. I’ve done the ten-year comparison before to illustrate the rapid pace of Moore’s Law, but just to be illustrative again, remember that 10 years ago:

  • the iPhone (and Android) did not exist
  • Facebook did not exist (Zuckerberg had just started at Harvard)
  • Amazon had yet to make a single cent of profit
  • Intel thought Itanium was its future (something it’s basically given up on now)
  • Yahoo had just launched a dialup internet service (seriously)
  • The Human Genome Project had yet to be completed
  • Illumina (the poster child for next-generation DNA sequencing today) had just launched its first system product

And, you know what, I bet that 10 years from now I’ll be able to make a similar list. Technology is a brutal industry, and it succeeds by continuously making itself obsolete. It’s why it’s exciting, and it’s why I don’t think any long-lasting digital skyscrapers will emerge – and, frankly, I hope none do.


Qualcomm Trying to Up its PR with Snapdragon Stadium

As a nerd and a VC, I’m very partial towards “enabling technologies” – the underlying technology that makes stuff tick. That’s one reason I’m so interested in semiconductors: much of the technology we see today has its origins in something that a chip or semiconductor product enabled. But, despite the key role they (and other enabling technologies) play in creating the products that we know and love, most people have no idea what “chips” or “semiconductors” are.

Part of that ignorance is deliberate – chip companies exist to enable electronics/product companies, not to steal the spotlight. The only exception to that rule that I can think of is Intel, which has spent a fair amount over the years on its “Intel Inside” branding and the numerous Intel Inside commercials that have popped up.

While NVIDIA has been good at generating buzz amongst enthusiasts, I would maintain that no other semiconductor company has quite matched Intel in public brand awareness – an awareness that has probably helped Intel command a higher price point because the public believes (rightly or wrongly) that computers with “Intel Inside” are better.

Well, Qualcomm looks like it wants to change that. Qualcomm makes chips that go into mobile phones and tablets and has benefited greatly from the rise of smartphones and tablets over the past few years, to the point where some might say it has a shot at being a real rival for Intel in terms of importance and reach. But for years, the most your typical non-techie person might have heard about the company is that it holds the naming rights to San Diego’s Qualcomm Stadium – home of the San Diego Chargers and former home of the San Diego Padres.

Well, on December 16th, in what is probably a very interesting test of whether it can boost consumer awareness of the Snapdragon product line it’s aiming at the next generation of mobile phones and tablets, Qualcomm announced that it will rename Qualcomm Stadium to Snapdragon Stadium for 10 days (coinciding with the San Diego County Credit Union Poinsettia Bowl and the Bridgepoint Education Holiday Bowl) – check out the pictures from the Qualcomm blog below!

[Photos: Snapdragon Stadium signage, via the Qualcomm blog]

Will this work? Well, if the goal is to get millions of people to buy phones with Snapdragon chips inside overnight, the answer is probably no. Running this sort of rebranding for only 10 days, for games that aren’t the Super Bowl, just won’t deliver that kind of PR boost. But as a test of whether its consumer branding efforts can raise awareness of the chips that power our phones – and potentially demand for “those Snapdragon whatchamacallits” in particular – this might be just what the doctor ordered.

I, for one, am hopeful that it does work – I’m a sucker for seeing enabling technologies and the companies behind them, like Qualcomm and Intel, get the credit they deserve for making our devices work better. And, frankly, having more people talk about the chips in their phones/tablets will push device manufacturers and chip companies to innovate faster.

(Image credit: Qualcomm blog)


Disruptive ARMada

I’ve mentioned before that one of the greatest things about being in the technology space is how rapidly the lines of competition change.

Take ARM, the upstart British chip company that licenses the chip technology powering virtually all mobile phones today. Although its designs have traditionally been relegated to “dumb” chips because of their low cost and low power consumption, ARM has been riding a wave of disruptive innovation to move beyond low-cost “dumb” featurephones into more expensive smartphones and, potentially, into new low-power/always-connected netbooks.

More interesting, though, is the recent revelation that ARM chips are being used not just in low-power, consumer-oriented devices but also in production-grade servers that power websites – something that has traditionally been the domain of more expensive chips from companies like AMD, Intel, and IBM.

And now, with:

  1. A large semiconductor company like Marvell officially announcing that it will release a high-end ARM chip called the Armada 310 targeted at servers
  2. A new startup called Smooth Stone (it’s a David-vs-Goliath allusion) raising $48M (some of it from ARM itself!) to build ARM chips aimed at data center servers
  3. ARM announcing its Cortex-A15 processor, a multicore beast with support for hardware virtualization and physical address extensions – features you would generally only see in a server product
  4. Dell (the leading supplier of servers for this new generation of web-scale data centers/customers) revealing that it has built proof-of-concept test servers running on ARM chips and looks forward to the next generation of ARM chips

It makes you wonder if we’re on the verge of another disruption in the high-end computer market. Is ARM about to repeat what Intel/AMD chips did to the bulkier chips from IBM, HP, and Sun/Oracle?



Don’t count your markets before they hatch

I was reading a very insightful analysis of the supercomputing industry over the past decade on scalability.org when I stumbled upon a chart that illustrates not only a pattern I see very often, but also a reason why you should always sandbag your forecasts if you’re betting on a new technology: your forecasts are almost always too optimistic.

Take Intel’s and HP’s huge gamble to push Itanium as the processor technology that would eventually replace all the other major processor technologies (i.e., SPARC, PowerPC, even Intel’s very own x86). Countless technology analysts and Intel/HP market researchers said Itanium would become the only game in town in computer processor technology – and with good reason. The technology that Itanium represented would, in theory, have completely changed the processor game.

Yet, if we look at the progression of Itanium sales versus Itanium forecasts, we see a very different picture:

[Chart: actual Itanium sales vs. successive industry forecasts]

If anything, Intel and HP have now distanced themselves from Itanium – preferring to ship products based on Intel’s homegrown x86 technology rather than the technology that analysts had expected to storm the market.

Kind of embarrassing, isn’t it? The point isn’t to spit in HP’s and Intel’s faces (however much they deserve it), but to point out that new technologies are notoriously hard to predict, and that, to whatever extent possible, companies should (a) never bet the farm on them and (b) watch what forecasts they make. Those forecasts may come back to haunt them.



HotChips 101

This post is almost a week overdue thanks to a hectic work week. In any event, I spent last Monday and Tuesday immersed in the high-performance chip world at the 2009 HotChips conference.

Now, full disclosure: I am not an electrical engineer, nor was I even formally trained in computer science. At best, I can “understand” a technical presentation in a manner akin to how my high school biology teacher explained his “understanding” of the Chinese language: “I know enough to get in trouble.”
But despite all of that, I was given a rare look at a world that few non-engineers ever get to see, and yet it is one which has a dramatic impact on the technology sector given the importance of these cutting-edge chip technologies in computers, mobile phones, and consumer electronics.

And here’s my business-strategy/non-expert-enthusiast view of the six big highlights I took away from the conference that best inform technology strategy:

  1. We are 5-10 years behind on the software development technology needed to truly get performance out of our new chips. Over the last decade, computer chip companies discovered that simply ramping up clock speeds (the Megahertz/Gigahertz number that everyone talks about when describing how fast a chip is) was not going to cut it as a way of improving computer performance (because of power consumption and heat issues). As a result, instead of making the cores (the processing engines) on a chip faster, chip companies like Intel resorted to adding more cores to each chip. The problem with this approach is that performance becomes highly dependent on software developers being able to create software which can figure out how to separate tasks across multiple cores and share resources effectively between them – something which is “one of the hardest if not the hardest systems challenge that we as an industry have ever faced” (courtesy of UC Berkeley professor Dave Patterson); the first sketch after this list gives a toy sense of what that task-splitting looks like. The result? Chip designers like Intel may innovate to the moon, but unless software techniques catch up, we won’t get to see any of that. Is it any wonder, then, that Intel bought multi-core software technology company RapidMind or that other chip designers like IBM and Sun are so heavily committed to creating software products to help developers make use of their chips? (Note: the image that originally accompanied this item was an Apple ad of an Intel bunny suit smoked by the PowerPC chip technology that Apple used to use)
  2. Computer performance may become more dependent on chip accelerator technologies. The traditional performance “engine” of a computer was the CPU, a product which has made the likes of Intel and IBM fabulously wealthy. But the CPU is a general-purpose “engine” – a jack of all trades, but a master of none. In response to this, companies like NVIDIA, led by HotChips keynote speaker Jen-Hsun Huang, have begun pushing graphics chips (GPUs), traditionally used for gaming or editing movies, as specialized engines for computing power. I’ve discussed this a number of times over at the Bench Press blog, but the basic idea is that instead of using the jack-of-all-trades-and-master-of-none CPU, a system should use specialized chips to address specialized needs. Because a lot of computing power is spent on the mathematical tasks that a GPU is suited for, the signal processing work that a digital signal processor might be better at, or the cryptography work that a cryptography accelerator is better suited for, this opens the door to the use of other chip technologies in our computers (the second sketch after this list gives a flavor of that kind of offload). NVIDIA’s GPU solution is one of the most mature, as they’ve spent a number of years developing a solution they call CUDA, but there was definitely a clear message: as the performance that we care about becomes more and more specialized (like graphics or number crunching or security), special chip accelerators will become more and more important.
  3. Designing high-speed chips is now less and less about “chip speed” and more and more about memory and input/output. An interesting blog post by Gustavo Duarte highlighted something very fascinating to me: your CPU spends most of its time waiting for things to do. So much time, in fact, that the best way to speed up your chip is not to speed up your processing engine, but to speed up getting tasks into your chip’s processing cores. The biological analogy to this is something called a perfect enzyme – an enzyme that works so fast that its speed is limited by how quickly it can get ahold of things to work on. As a result, every chip presentation spent ~2/3 of the time talking about managing memory (where the chip stores the instructions it will work on) and managing how quickly instructions from the outside (like from your keyboard) get to the chip’s processing cores. In fact, one of the IBM POWER7 presentations spent almost the entire time discussing the POWER7’s use and management of embedded DRAM technology to speed up how quickly tasks can get to the processing cores.
  4. Moore’s Law may no longer be as generous as it used to be. I mentioned before that one of the big “facts of life” in the technology space is the ability of the next product to be cheaper, faster, and better than the last – something I attributed to Moore’s Law (an observation that chip technology doubles in capability every ~2 years). At HotChips, there was a fascinating panel discussing the future of Moore’s Law, mainly asking (a) whether Moore’s Law will continue to deliver benefits and (b) what happens if it stops. The answers were not very uplifting. While there was a wide range of opinions on how much we’d be able to squeeze out of Moore’s Law going forward, there was broad consensus that the days of just letting Moore’s Law lower your costs, reduce your energy bill, and increase your performance simultaneously were over. The amount of money it costs to design next-generation chips has grown exponentially (one panelist cited a cost of $60 million just to start a new custom project), and the amount of money it costs to operate a semiconductor factory has skyrocketed into the billions. And, as one panelist put it, constantly riding the Moore’s Law technology wave has forced the industry to rely on “tricks” which reduced the delivery of all the benefits that Moore’s Law was typically able to bring about. The panelists warned that future chip innovations were going to be driven more and more by design and software rather than by blindly following Moore’s Law, and that unless new ways to develop chips emerged, the chip industry could find its progress slowing.
  5. Power management is top of mind. The second keynote speaker, EA Chief Creative Officer Richard Hilleman, noted something which gave me significant pause. He said that in 2009, China will probably produce more electric cars in one year than have ever been produced in all of history. The impact on the electronics industry? It will soon be very hard to find and very expensive to buy batteries. This, coupled with the desire of consumers everywhere for longer battery life in their computers, phones, and devices, means that managing power consumption is critical for chip designers. In each presentation I watched, I saw the designers roll out a number of power management techniques – the most amusing of which was employed by IBM’s new POWER7 uber-chip. The POWER7 can implement four different low-power modes (so that the system can tune its power consumption), which were humorously named: doze, nap, sleep, and “Rip van Winkle”.
  6. Chip designers can no longer just build “the latest and greatest”. There used to be one playbook in Silicon Valley – build what you did a year ago, but make it faster. That playbook is fast becoming irrelevant. No longer can Silicon Valley just count on people to buy bigger and faster computers to run the latest and greatest applications. Instead, people are choosing to buy cheaper computers to run Facebook and Gmail, which, while interesting and useful, no longer need the CPU or monitor with the greatest “digital horsepower.” EA’s Richard Hilleman noted that this trend was especially important in the gaming industry. Where before the industry focused on hardcore gamers who spent hours and hours building their systems and playing immersive games, today it is keen on building games with clever mechanics (e.g. Guitar Hero or a game for the Nintendo Wii) for people with short attention spans who aren’t willing to spend hours holed up in front of their televisions. Instead of focusing on pure graphical horsepower, gaming companies today want to build games which can be social experiences (like World of Warcraft) or which can be played across many devices (like smartphones or over social networks). With stores like GameStop on the rise, gaming companies can no longer count on just selling games; they need to figure out how to sell “virtual goods” (like upgrades to your character/weapons) or in-game advertising (a Coke billboard in your game?) or encourage users to subscribe. What this all means is that, to stay relevant, technology companies can no longer just gamble on their ability to make yesterday’s products faster; they have to make them better, too.
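To make item 1 a bit more concrete, here is a minimal sketch – not from the conference, just a generic Python illustration – of what splitting a CPU-bound job across multiple cores looks like. The prime-counting workload and all the names in it are made up for the example.

```python
import math
from concurrent.futures import ProcessPoolExecutor


def count_primes(bounds):
    """Count primes in [lo, hi) by trial division -- deliberately CPU-heavy."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, math.isqrt(n) + 1)):
            count += 1
    return count


if __name__ == "__main__":
    # Carve one big range into independent chunks so each core gets its own slice.
    chunks = [(i, i + 250_000) for i in range(0, 2_000_000, 250_000)]
    with ProcessPoolExecutor() as pool:  # defaults to one worker per CPU core
        total = sum(pool.map(count_primes, chunks))
    print(f"primes below 2,000,000: {total}")
```

The hard part Patterson alludes to is everything this toy example gets to ignore: the chunks here share no data, so there is nothing to coordinate. Real applications rarely decompose this cleanly, which is why the software side is lagging the hardware.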
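And here is an equally rough sketch of the accelerator idea from item 2: the same matrix multiply done on the CPU with NumPy and offloaded to an NVIDIA GPU with CuPy. This assumes a CUDA-capable GPU and the optional cupy package are available – neither of which is implied by the conference material itself – and falls back to the CPU-only path if CuPy isn’t installed.

```python
import numpy as np

a = np.random.rand(2048, 2048).astype(np.float32)
b = np.random.rand(2048, 2048).astype(np.float32)

# CPU path: the general-purpose "jack of all trades"
c_cpu = a @ b

try:
    import cupy as cp  # assumption: CuPy installed and a CUDA GPU available

    # GPU path: copy the operands to device memory, multiply there, copy back
    c_gpu = cp.asnumpy(cp.asarray(a) @ cp.asarray(b))
    print("max CPU/GPU difference:", float(np.abs(c_cpu - c_gpu).max()))
except ImportError:
    print("cupy not installed -- ran the CPU path only")
```

The point isn’t this particular library; it’s that the math-heavy inner loop gets handed to hardware built for exactly that shape of work, which is what CUDA and its kin make possible.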

There was a lot more that happened at HotChips than I can describe here (and I skipped over a lot of the more techy details), but those were six of the most interesting messages that I left the conference with, and I am wondering if I can get my firm to pay for another trip next year!

Oh, and just to brag: while at HotChips, I got to see a demo of the potential blockbuster game Batman: Arkham Asylum while checking out NVIDIA’s 3D Vision product. And I have to say, I’m very impressed by both – and am now very tempted by NVIDIA’s “buy a GeForce card, get Batman: Arkham Asylum free” offer.

(Image credit: Intel bunny smoked ad) (Image credit: GPU computing power) (Image Credit: brick wall) (Image – Rip Van Winkle) (Image – World of Warcraft box art)


More Made in Taiwan

It’s been a while since I visited the topic of Taiwan’s pivotal role in the global technology supply chain. So it’s long overdue for some not-so-shameless plugging of news involving my favorite island country’s technology industry and the impact it has had on the technology space:

Hopefully that’s a small taste of why so many tech analysts watch the Taiwanese industry so carefully.



Playing with Monopoly

With the recent challenges to Google’s purchase of DoubleClick, Microsoft’s endless courtship of Yahoo, and the filing of more papers in the upcoming Intel/AMD case, the question of “why should the government break up monopolies?” becomes much more relevant.

This is a question that very few people ask, even though it is oftentimes taken for granted that the government should indeed engage in anti-trust activity.

The logic behind modern anti-trust efforts goes back to the era of the railroad, steel, and oil trusts of the Gilded Age, when massive and abusive firms engaged in collusion and anti-competitive behavior to fix prices and prevent new entrants from entering the marketplace. As any economist will be quick to point out, one of the secrets to the success of a market economy is competition – whether it be workers competing with workers to be more productive or firms competing with firms to deliver better and cheaper products to their customers. When you remove competition, there is no longer any pressing reason to maintain quality or keep costs down.

So – we should regulate all monopolies, right? Unfortunately, it’s not that simple. The logic that competition is always good is greatly oversimplified, as it glosses over two key things:

  1. It’s very difficult to determine what is a monopoly and what isn’t.
  2. Technology-driven industries oftentimes require large players to deliver value to the customer.

What’s a Monopoly?


While we would all love monopolies to have clear and distinguishable characteristics – maybe an evil-looking man dressed all in black, laughing sinisterly as his diabolical plans destroy a preschool? – the fact of the matter is that it is very difficult for an economist or businessperson to tell what counts as a monopoly and what doesn’t, for four key reasons:

  1. Many of the complaints and lawsuits brought against “monopolies” are brought by competitors. Who is trying to sue Intel? AMD. Who complained loudly about Microsoft’s bundling of Internet Explorer into Windows? Netscape.
  2. “Market share” has no meaning. In a sense, there are a lot of monopolies out there. Orson Scott Card has a 100% market share in books pertaining to the Ender’s Game series. McDonald’s has a 100% market share in Big Macs. This may seem like I’m just playing with semantics, but this is actually a fairly serious problem in the business world. I would even venture that a majority of growth strategy consulting projects are due to the client being unable to correctly define the relevant market and relevant market share.
  3. What’s “monopoly-like” may just be good business. Some have argued that Microsoft and Intel are monopolies in that they bully their customers, aggressively pushing PC manufacturers to only purchase from them. But what is harder to discern is how this is any different from a company that offers aggressive volume discounts. Or one that hires the best-trained negotiators. Or one that knows how to produce the best products and demands a high price for them. Sure, Google is probably “forcing” its customers to pay more to advertise on Google, but if Google’s services and reach are the best, what’s wrong with that?
  4. “Victims” of monopolies may just be lousy at managing their business. AMD may argue that Intel’s monopoly power is hurting their bottom line, but at the end of the day, Intel isn’t directly to blame for AMD’s product roadmap mishaps, or its disastrous acquisition of ATI. Google isn’t directly to blame for Microsoft’s inability to compete online.

Big can be good?

This may come as a shock, but there are certain cases where large, monolithic entities are actually good for the consumer. Most of these involve technological innovation. Here are a few examples:

  • Semiconductors – The digital revolution would not have been possible without the fast, power-efficient, and tiny chips which act as their brains. What is not oftentimes understood, however, is the immense cost and time required to build new chips. It takes massive companies with huge budgets to build tomorrow’s chips. It’s for this reason that most chip companies don’t run their own manufacturing centers and are steadily slowing down their R&D/product roadmaps as it becomes increasingly costly to design and build out chips.
  • Pharmaceuticals – Just as with semiconductors, it is very costly, time-consuming, and risky to do drug development. Few of today’s biotech startups can actually even bring a drug to market — oftentimes hoping to stay alive just long enough to partner with or be bought by a larger company with the money and experience to jump through the necessary hoops to take a drug from benchside to bedside.
  • Software platforms – Everybody has a bone to pick with Microsoft’s shoddy Windows product line. But what few people recognize is how much the software industry benefited from the role that Microsoft played early on in the computer revolution. By quickly becoming the dominant operating system, Microsoft’s products made it easier for software companies to reach wide audiences. Instead of designing 20 versions of every application/game to run on 20 operating systems, Microsoft made it possible to design just one. This, of course, isn’t to say that we need an OS monopoly right now to build a software industry, but it is fair to say that Microsoft’s early “monopoly” was a boon to the technology industry.

The problem with today’s anti-trust rules and regulations is that they are legal rules and regulations, not economic ones. In that way, while they may protect against many of the abuses of the Gilded Age (by preventing firms from getting 64.585% market share and preventing them from monopolistic action 1 through 27), they also unfortunately act as deterrents to innovation and good business practice.

Instead, regulators need to take a broader, more holistic view of anti-trust. Rather than relying on market-share litmus tests and paying attention to sob stories from the Netscapes of the world, regulators need to focus, first, on determining whether the alleged offender is actually behaving in a harmfully anticompetitive way at all and, second, on whether there is credible economic value in the institutions they seek to regulate.


Redacted

Context:
AMD wants to sue Intel for anti-competitive behavior. AMD’s argument is that Intel has used its market leadership to exert unfair influence on customers to force AMD out of the market. Intel retorts that it is simply making better products.

To bring this dispute to trial, AMD needs to file certain papers with the courts. Due to the sensitive nature of the subject (internal Intel and OEM documents may reveal technical or strategic secrets), much of the paperwork has been blacked out (or “redacted”, if you want to use that smart lawyer vocabulary).

As a result, this wonderfully clear excerpt was made public:

Indeed. That is clearly what the lawsuit is all about.
(Source: The Register)
