
Tag: semiconductor

Why VR Could be as Big as the Smartphone Revolution

Technology in the 1990s and early 2000s marched to the beat of an Intel-and-Microsoft-led drum.

[Image: Microsoft and Intel logos, via IT Portal]

Intel would release new chips at a regular cadence: each cheaper, faster, and more energy-efficient than the last. This would let Microsoft push out new, more performance-hungry software, which would, in turn, get customers to want Intel’s next, more awesome chip. Couple that virtuous cycle with the fact that millions of households were buying their first PCs and getting onto the Internet for the first time, and you had enormous opportunities to build businesses and products across software and hardware.

But, over time, that cycle broke down. By the mid-2000s, Intel’s technological progress had bumped into the limits of what physics would allow in chip performance and cost. Complacency born of its enviable market share, coupled with software bloat in its Windows and Office franchises, had a similar effect on Microsoft. The result was that the Intel-and-Microsoft drum stopped beating, as the two became unable to give the mass market a compelling reason to upgrade to each subsequent generation of devices.

What followed was a hollowing out of the hardware and semiconductor industries tied to the PC market – one masked only by the innovation stemming from the rise of the Internet and, in the late 2000s, the dawn of a new technology cycle in the form of Apple’s iPhone and its Android competitors: the smartphone.

[Image: Steve Jobs, via Mashable]

A new, but eerily familiar, cycle began: like clockwork, Qualcomm, Samsung, and Apple (playing the part of Intel) would devise new, more awesome chips, which would feed the creation of new performance-hungry software from Google and Apple (playing the part of Microsoft), which in turn led to demand for the next generation of hardware. Just as with the PC cycle, new and lucrative software, hardware, and service businesses flourished.


But, just as with the PC cycle, the smartphone cycle is starting to show signs of maturity. Apple’s recent slower-than-expected growth has already been blamed on smartphone market saturation. Users are beginning to see each new generation of smartphone as a marginal improvement. There are also eerie parallels between the growing complaints about Apple software quality – even from Apple fans – and the position Microsoft was in near the end of the PC cycle.

While it’s too early to call the end for Apple and Google, history suggests that smartphones will eventually enter a phase similar to the one the PC industry experienced. That raises the question: what’s next? Many of the traditional answers – connected cars, the “Internet of Things”, wearables, digital TVs – have not yet proven themselves to be truly mass market, nor have they shown the virtuous technology upgrade cycle that characterized the PC and smartphone industries.

This brings us to Virtual Reality. With VR, we have a new technology paradigm that can (potentially) appeal to the mass market (new types of games, new ways of doing work, new ways of experiencing the world, etc.). It also has a high bar for hardware performance that will benefit dramatically from advances in technology, not dissimilar from what we saw with the PC and smartphone.

[Image: Oculus headset, via Forbes]

The ultimate proof will be whether a compelling ecosystem of VR software and services emerges to make this technology a mainstream “must-have” (something that, admittedly, the high price of the first-generation Facebook/Oculus, HTC/Valve, and Microsoft products may hinder).

As a tech enthusiast, it’s easy to get excited. Not only is VR just frickin’ cool (it is!), it’s probably the first thing since the smartphone with the mass appeal and virtuous upgrade cycle that can bring about the huge flourishing of products and companies that makes tech so dynamic to be involved with.


Qualcomm Trying to Up its PR with Snapdragon Stadium

As a nerd and a VC, I’m very partial towards “enabling technologies” – the underlying technology that makes stuff tick. That’s one reason I’m so interested in semiconductors: much of the technology we see today has its origins in something that a chip or semiconductor product enabled. But, despite the key role they (and other enabling technologies) play in creating the products that we know and love, most people have no idea what “chips” or “semiconductors” are.

Part of that ignorance is deliberate – chip companies exist to help electronics/product companies, not steal the spotlight. The only exception to that rule that I can think of is Intel, which has spent a fair amount over the years on its “Intel Inside” branding and the numerous “Intel Inside” commercials that have popped up.

While NVIDIA has been good at generating buzz amongst enthusiasts, I would maintain that no other semiconductor company has quite matched Intel in public brand awareness – an awareness that has probably helped Intel command a higher price point, because the public believes (rightly or wrongly) that computers with “Intel Inside” are better.

Well, Qualcomm looks like it wants to upset that. Qualcomm makes chips that go into mobile phones and tablets and has benefited greatly from the rise of smartphones and tablets over the past few years, to the point where some might say it has a shot at rivaling Intel in importance and reach. But for years, the most your typical non-techy person might have heard about the company is that it holds the naming rights to San Diego’s Qualcomm Stadium – home of the San Diego Chargers and former home of the San Diego Padres.

Well, on December 16th, Qualcomm announced it will rename Qualcomm Stadium to Snapdragon Stadium for 10 days (coinciding with the San Diego County Credit Union Poinsettia Bowl and the Bridgepoint Education Holiday Bowl) – probably a very interesting test of whether it can boost consumer awareness of the Snapdragon product line it is aiming at the next generation of mobile phones and tablets. Check out the pictures from the Qualcomm blog below!

[Photos of Snapdragon Stadium signage, via the Qualcomm blog]

Will this work? Well, if the goal is to get millions of people to buy phones with Snapdragon chips inside overnight, the answer is probably no. Running this sort of rebranding for only 10 days, for games that aren’t the Super Bowl, just won’t deliver that kind of PR boost. But as a test of whether consumer branding efforts raise awareness of the chips that power people’s phones – and potentially demand for “those Snapdragon whatchamacallits” in particular – this might be just what the doctor ordered.

I, for one, am hopeful that it does work. I’m a sucker for seeing enabling technologies and the companies behind them, like Qualcomm and Intel, get the credit they deserve for making our devices work better – and, frankly, having more people talk about the chips in their phones/tablets will push device manufacturers and chip companies to innovate faster.

(Image credit: Qualcomm blog)


The Marketing Glory of NVIDIA’s Codenames

This is an old tidbit, but nevertheless a good one that has (somehow) never made it to my blog. I’ve mentioned before the private equity consulting world’s penchant for silly project names, but while code names are not rare in the corporate world, more often than not the names tend to be dull and of little use to the company. NVIDIA’s code names, however, are pure marketing glory.

Take NVIDIA’s high performance computing product roadmap (below) – these are products that take the graphics processing capabilities of NVIDIA’s high-end GPUs and turn them into smaller, cheaper, and more power-efficient supercomputing engines which scientists and researchers can use to crunch numbers (check out entries from the Bench Press blog for an idea of what researchers have been able to do with them). How does NVIDIA describe its roadmap? With the names of famous scientists: Tesla (the great Serbian-American electrical engineer who helped bring us AC power), Fermi (the father of the nuclear reactor), Kepler (one of the first astronomers to apply physics to astronomy), and Maxwell (the physicist who showed that electrical, magnetic, and optical phenomena are all linked).

[Image: NVIDIA CUDA GPU roadmap]

Who wouldn’t want to do some “high power” research (pun intended) with Maxwell? 🙂
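For a flavor of the kind of number-crunching these chips get pointed at, here is a minimal, illustrative CUDA C sketch of SAXPY (y = a*x + y), the simple vector operation at the heart of a lot of scientific computing. To be clear, this is my own toy example – the sizes and values are made up, not anything from NVIDIA’s roadmap materials – but the whole trick of GPU computing is visible in it: thousands of GPU threads each update one element of a vector in parallel.

#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

// Each GPU thread computes one element of y = a*x + y.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's element
    if (i < n) y[i] = a * x[i] + y[i];
}

int main(void) {
    const int n = 1 << 20;  // a million elements
    size_t bytes = n * sizeof(float);

    // Fill the input vectors on the CPU ("host")...
    float *hx = (float *)malloc(bytes), *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // ...copy them into GPU ("device") memory...
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // ...and launch enough 256-thread blocks to cover all n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);

    // Copy the result back and spot-check it.
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);  // expect 4.0 (2*1 + 2)

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}

Compile that with NVIDIA’s nvcc and the GPU chews through all million elements essentially at once; the research codes in those Bench Press entries are this same pattern scaled up to far hairier math.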

But what really takes the cake for me are the codenames NVIDIA uses for its smartphone/tablet chips: the Tegra line of products. Instead of scientists, it uses, well, comic book characters (now you know why I love them, right?) :-). For release at the end of this year? Kal-El – or, for the uninitiated, Superman’s Kryptonian name. After that? Wayne, as in the alter ego of Batman. Then Logan, as in the X-Men’s Wolverine. And then Stark, as in the alter ego of Iron Man.

[Image: NVIDIA Tegra roadmap]

Everybody wants a little Iron Man in their tablet :-).

And, now I know what I’ll name my future secret projects!

(Image credit – CUDA GPU Roadmap) (Image credit – Tegra Roadmap)


Disruptive ARMada

I’ve mentioned before that one of the greatest things about being in the technology space is how quickly the lines of competition rapidly change.

Take ARM, the upstart British chip company which licenses the processor technology that powers virtually all mobile phones today. Although ARM chips have traditionally been relegated to “dumb” devices because of their low cost and low power consumption, ARM has been riding a wave of disruptive innovation, moving beyond low-cost “dumb” featurephones into more expensive smartphones and, potentially, into new low-power/always-connected netbooks.

More interesting, though, is the recent revelation that ARM chips have been used in more than just low-power consumer-oriented devices – they have also appeared in production-grade servers that can power websites, territory that has traditionally belonged to more expensive chips from companies like AMD, Intel, and IBM.

And now, with:

  1. A large semiconductor company, Marvell, officially announcing that it will release a high-end ARM chip called the Armada 310 targeted at servers
  2. A new startup called Smooth Stone (it’s a David-vs-Goliath allusion) raising $48M (some of it from ARM itself!) to build ARM chips aimed at data center servers
  3. ARM announcing its Cortex A15 processor, a multicore beast with support for hardware virtualization and physical address extensions – features you would generally only see in a server product
  4. Dell (the leading supplier of servers for this new generation of webscale data centers/customers) revealing that it has built test servers running on ARM chips as a proof of concept and looks forward to the next generation of ARM chips

It makes you wonder if we’re on the verge of another disruption in the high-end computer market. Is ARM about to repeat what Intel/AMD chips did to the bulkier chips from IBM, HP, and Sun/Oracle?

(Image credit)


Apple TV Disassembled

My good friend Joe and I spent a little time last week disassembling an Apple TV – partly to take a look inside, and partly to get a sense of what folks at companies like iSuppli and Portelligent do when they run their teardowns. I was also asked to take a look at how prominent/visible a chip company in the portfolio of my venture capital employer is inside the Apple TV.

I apologize for the poor photo quality (I wasn’t originally planning on posting these and I just wanted to document how we took it apart so that I knew how to put it back together). But, if you bear with me, here’s a picture of the Apple TV itself before we “conducted open heart surgery” (it fits in the palm of your hand!):

[Photo: the Apple TV, pre-surgery]

Here is what it looks like after we pry off the cover (with a flathead screwdriver or a paint wedge) – notice how thick the edges of the device are. This matters: I have nothing but sympathy for the poor engineers who had to design the infrared sensor and “blaster” for the remote such that it was powerful enough to penetrate that wall (but cheap and energy-efficient enough for Apple to include it).

[Photo: the Apple TV with its cover removed]

I’m not exactly sure what the pink material is, but my guess, based on how “squishy” it was, is that it is some sort of shock absorber to help protect the device. Unscrewing the outermost set of screws (two of which are hidden under the shock absorber), we finally get at the circuit board at the heart of the device:

[Photo: the circuit board, still in the enclosure]

Using tweezers, we removed one of the connectors (I assumed it linked the chips on the board with the power supply), allowing us to detach the board from the enclosure:

[Photos: the board detached from the enclosure]

We then had to remove the pesky electromagnetic shield (the metallic cover over most of the board), unveiling the chips inside (unfortunately, I didn’t take a picture of the opposite side of the board, where you would actually see the Apple A4 chip):

[Photo: the board with its EMI shield removed]

To give a little perspective on the size of those chips, here is the board next to the original device:

[Photo: the board next to the device]

Cool, isn’t it? (At least Joe and I thought so 🙂)

As for reflections on the process:

  • It’s a lot simpler than you would expect. Granted, we didn’t tear down a mobile phone (which is sealed somewhat more securely – although Joe and I might for kicks someday 🙂), but at the end of the day, much of the “magic” is not in the hardware packaging but in the software, in the chips, and in the specialized hardware (antenna, LCD).
  • With that said, it’d probably be pretty difficult to tear down a device without someone knowing. The magic may not be in the packaging, but ODMs/EMSs like Foxconn have built a solid business around precision placement and sealing, and human hands are unlikely to match that precision (or to remove and replace an EMI shield without deforming it 🙂).
  • Given how simple this is, I personally believe that no tech analyst worth his or her pay should go without either doing their own teardown or buying one from someone else. It’s a very simple way to see how chip companies are doing (just look at the board to see whether they’re winning sales!), and it’s a great way to understand a device’s manufacturing cost, design process, and technological capabilities.

 


Replicating Taiwan’s Success

I’m always a fan of stories/articles highlighting the importance of Taiwan in the technology industry, so I was especially pleased that one of my favorite publications recently put out an article highlighting the key Computex industry conference, the role of the Taiwanese government’s ITRI R&D organization in cultivating Taiwan’s technology sector, and the rise of Taiwan’s technology stars (Acer, HTC, MediaTek, and TSMC).

Some of the more interesting insights concern two of the causes the article credits for Taiwan’s disproportionate prominence in the global technology supply chain:

Much of the credit for the growth of Taiwan’s information technology (IT) industry goes to the state, notably the Industrial Technology Research Institute (ITRI). Founded in 1973, ITRI did not just import technology and invest in R&D, but also trained engineers and spawned start-ups: thus Taiwan Semiconductor Manufacturing Company (TSMC), now the world’s biggest chip “foundry”, was born. ITRI also developed prototypes of computers and handed the blueprints to private firms.

Taiwan’s history also helps make it the “best place in the world to turn ideas into physical form,” says Derek Lidow of iSuppli, a market-research firm. Japan colonised the island for half a century, leaving a good education system. Amid the turmoil of the Kuomintang’s retreat to Taiwan from mainland China, engineering was encouraged as a useful and politically uncontroversial discipline. Meanwhile, strong geopolitical ties with America helped foster educational and commercial links too. Western tech firms set up shop in Taiwan in the 1960s, increasing the pool of skilled workers and suppliers.

It also provides some interesting lessons for countries like Russia who are struggling to gain their own foothold in the lucrative technology industry:

  • Facilitate the building of industrial parks with strong ties to R&D centers of excellence. Taiwan’s ITRI helped build the technical expertise Taiwan needed early on to gain ground in the highly competitive and sophisticated technology market by seeding it with resources and equipment. The government’s cooperation in the creation of Hsinchu Science and Industrial Park near ITRI headquarters and two major universities helped build the community of technologists, engineers, and businessmen needed to achieve a self-sustaining Silicon Valley.
  • Make strategic bets on critical industries and segments of the value chain. Early on, ITRI recognized the strategic importance of the semiconductor industry and went out of its way to seed the creation of Taiwan’s foundries. This was uniquely far-sighted, as it not only allowed Taiwan to participate in a vital industry but also helped create the “support network” that Taiwan needed for its own technology industry to flourish. While semiconductor giants like Intel and Samsung can afford the factories to build their own chips, small local companies are hard-pressed to do so (see my discussion of the foundry industry as a disruptive business model). Having foundries like TSMC nearby lets smaller local companies compete on a more even footing with larger companies, and these local companies in turn not only grow but also provide the support basis for still other companies.
  • Build a culture which encourages talent (domestic and foreign) to participate in strategic industries. This is one example where it’d be best not to imitate Taiwan exactly: as the Economist points out, the political turmoil in Taiwan until the mid-80s made politically neutral careers such as engineering more attractive. But in the same way that “culture” drove a big boom in technology in Taiwan, an environment which fostered smart and entrepreneurial engineers helped bring about the rise of the Silicon Valley as a global technology center (with the defense industry playing a similar role to Taiwan’s ITRI). Countries wishing to replicate this will need to go beyond just throwing money at speculative industries and find their own way to encourage workers to develop the right set of skills and talents and to openly make use of them in simultaneously collaborative and entrepreneurial/business-like ventures. No amount of government subsidies or industrial park development can replace that.
  • Learn as you go. To stay relevant, you need to be an old dog who learns new tricks. The Taiwanese technology industry, for example, is in a state of transition. Like Japan before it, it is learning to adapt to a world in which its cost position is not supreme and where its historical lack of focus on branding and intellectual-property-backed R&D is a detriment rather than a cost-saving/customer-enticing play. But the industry is not standing still. In conjunction with ITRI, it is learning to focus on design, IP, and branding. ITRI itself has (rightfully) taken a less heavy-handed approach in shepherding its large and flourishing industry, now encouraging investment in the new strategic areas of wireless communications and LEDs.

Jury’s still out on lesson #5 (which is why I didn’t mention it) – have some sort of relation to me – after all, I was born in Taiwan and currently live in the Silicon Valley… 🙂


Apple to buy Intrinsity?

I recently read an interesting rumor on the tech blog Ars Technica that Apple has acquired small processor company Intrinsity – whose website is, as of the time of this writing, down.

In the popular tech press, very few self-professed gadget fans are aware of the nuances of the chip technology which powers their favorite devices. So, first question: what does Intrinsity do, and why would Apple be interested in them? Intrinsity is a chip design company known for its expertise in making existing processor designs faster and more efficient. It has been retained in the past by ATI (the graphics chip company which is now part of AMD) to enhance its GPU offering, by Applied Micro (formerly AMCC) to help speed up its embedded processors, and more recently by Samsung (and presumably Apple) to speed up the ARM processor technology which powers the applications on the iPhone and the iPad.

Second question, then: would Apple do it? Questions about Apple are very difficult to answer – in part because of the extreme amount of hype and rumor surrounding the company, but also because it tends to “think different” about business strategy. Normally, my intuition would say that this deal is unlikely to make much sense. I’ll admit I haven’t looked at the deal terms or Intrinsity’s finances, but my guess is Intrinsity has a flourishing business with other chip companies which would probably be jeopardized by Apple’s acquisition (especially now that Apple is itself sort of a chip design company and will probably want to de-emphasize the rest of Intrinsity’s activities). An acquisition like this could also be risky, as Apple’s core strengths lie in building and designing a small number of well-integrated hardware/software products. While most analysts suspect that Apple contributed a huge amount to the design of the Samsung chip that’s currently in the iPhone, Apple is unlikely to have a culture or set of corporate processes that match Intrinsity’s, and I suspect nursing a chip technology group while also pushing the edge on product design and innovation at some point just becomes too difficult to do (which may partially explain the exodus of engineers from PA Semi, Apple’s other chip company purchase, post-acquisition).

Of course, Apple is not your ordinary technology company, and there are definitely major benefits Apple could gain from this. The most obvious is that Apple can avoid paying licensing, royalty, and service fees to Intrinsity (which can add up quickly if Apple continues to ship as many products as it does now) by bringing the company in-house. Strategically, if Intrinsity is truly as good as they claim (I’ve read my fair share of rumors that the A4 processor in the iPad was a joint development effort between Samsung, Apple, and Intrinsity), then Apple may also want to take this valuable chess piece off the table for its competitors. It’s no secret that major chip vendors like Qualcomm, NVIDIA, Texas Instruments, and Intel see the mobile chip space as the next hot growth area – Apple could perceive leaving Intrinsity out there as a major risk to maintaining its own device performance against the very impressive Snapdragon, Tegra, and OMAP (and potentially Intel Atom) product lines. This is a similar move to what Apple did with its equity stake in Imagination Technologies, the company that licenses the graphics technology that powers the iPhone, the Palm Pre, and Motorola’s Droid. It’s widely believed that, had Imagination been willing (and had Intel not also increased its stake in Imagination), Imagination would currently be an Apple division – highlighting Apple’s preference not to license technology which could potentially remain available to its competitors, but to bring it in-house.


So, in the end, does an Apple-Intrinsity deal make sense? Or is this just a rumor to be dismissed? It’s hard to say for sure without knowing much about how profitable Intrinsity is, how much of its business comes from Apple/Samsung, and what sort of price Apple could negotiate. But if Intrinsity has key talent or intellectual property that Apple needs for its new devices, then Apple’s extremely high volume (and thus large payments to Intrinsity) could be the basis for fairly sizable financial benefits from such a deal. More importantly, on a strategic level, Apple’s need to maintain a performance lead over new Android (and Symbian and Windows Phone 7) devices could be all the justification needed for swallowing this attractive asset (note: AnandTech’s preliminary review shows the iPad outperforming Google’s Nexus One on web rendering speed – although how much of this is due to the iPad’s bigger battery is up for debate). There is definitely a lot of reason to do it.

(logo credit) (logo credit) (smartphone cartoon credit)


Innovator’s Business Model

A few weeks back, I wrote a quick overview of Clayton Christensen’s explanation for how new technologies/products can “disrupt” existing products and technologies. In a nutshell, Christensen explains that new “disruptive innovations” succeed not because they win in a head-to-head comparison with existing products (i.e. laptops versus desktops), but because they have three things:

  1. Good-enough performance in one area for a certain segment of users (i.e. laptops were generally good enough to run simple productivity applications)
  2. Very strong performance on an unrelated feature which eventually becomes very important to more than one small niche (i.e. laptops were portable, desktops were not, and that became very important as consumers everywhere started demanding laptops)
  3. The potential to improve by riding the industry learning curve to the point where they can compete head-to-head with an existing product (i.e. laptops can now be as fast as, if not faster than, most desktops)

But, while most people think of Christensen’s findings as applied to product and technology shifts, this model of how innovations overtake one another can be just as easily applied to business models.

A great example of this lies in the semiconductor industry. For years, the dominant business model for semiconductor companies was the Integrated Device Manufacturer (IDM) model – a model whereby semiconductor companies both designed and manufactured their own products. The primary benefit of this was tighter integration of design and manufacturing. Semiconductor manufacturing is highly sophisticated, requiring all sorts of specialized processes, chemicals, and equipment, and there is a great deal of interplay between a chip’s design and the process used to manufacture it. Having both design and manufacturing under one roof allowed IDMs to create better products more quickly, as they were able to exploit that interplay and more readily correct problems as they arose. IDMs were also able to tweak their manufacturing processes to push specific features, letting them differentiate their products from their peers’.

But a new semiconductor model emerged in the early 1990s – the fabless model. Unlike IDMs, fabless companies don’t own their own semiconductor factories (called fabs – hence the name “fabless”) and outsource their manufacturing to either IDMs with spare manufacturing capacity or dedicated contract manufacturers called foundries (the two largest of which are based in Taiwan).

At first, the industry scoffed at the fabless model. After all, these companies could not tightly link their designs to manufacturing, had to rely on the spare capacity of IDMs (who would readily take it away if they needed it) or on foundries in Taiwan, China, and Singapore which lagged the leading IDMs in manufacturing capability by several years.

But the key to Christensen’s disruptive innovation model is not that the “new” is necessarily better than the “old,” but that it is good enough on one dimension and great on other, more important dimensions. So, while fabless companies were at first unable to keep up with the dominant IDMs in bleeding-edge manufacturing technology, the fabless model had a significant cost advantage (fabless companies didn’t need to build and operate expensive fabs) and a strategic advantage: management could focus its resources and attention on building the best designs rather than also worrying about running a smooth manufacturing operation.

The result? Fabless companies like Xilinx, NVIDIA, Qualcomm, and Broadcom took the semiconductor industry by storm, growing rapidly and bringing their allies, the foundries, along with them to achieve technological parity with the leading IDMs. This model has been so successful that, today, much of the semiconductor space is either fabless or pursuing a fab-lite model (where they outsource significant volumes to foundries, while holding on to a few fabs only for certain products), and TSMC, the world’s largest foundry, is considered to be on par in manufacturing technology with the last few leading IDMs (i.e. Intel and Samsung). This gap has been closed so impressively, in fact, that former IDM-technology leaders like Texas Instruments and Fujitsu have now decided to rely on TSMC for their most advanced manufacturing technology.

To use Christensen’s logic: the fabless model was “good enough” on manufacturing technology for a niche of semiconductor companies, but great in terms of cost. This cost advantage helped the fabless companies and their allies, the foundries, to quickly move up the learning curve and advance in technological capability to the point where they disrupted the old IDM business model.

This type of disruptive business model innovation is not limited to the semiconductor industry. A couple of weeks ago, The Economist ran a great series of articles on the mobile phone “ecosystem” in emerging markets. The entire time I was reading it, I was struck by the numerous ways in which the rise of the mobile phone in emerging markets was creating disruptive business models. One in particular caught my eye as something very similar to the fabless semiconductor story: the so-called “Indian model” of managing a mobile phone network.

Traditional Western/Japanese mobile phone carriers like AT&T and Verizon set up very expensive networks using equipment that they purchase from telecommunications equipment providers like Nokia-Siemens, Alcatel-Lucent, and Ericsson. (In theory,) the carriers are able to invest heavily in their own networks to roll out new services and new coverage because they own their own networks and because they are able to charge customers, on average, ~$50/month. These investments (in theory) produce better networks and services, which reinforce their ability to charge a premium on a per-customer basis.

In emerging markets, this is much harder to pull off, since customers don’t have enough money to pay $50/month. The “Indian model”, which began in emerging countries like India, is a way for carriers in low-cost countries to adapt to the constraints imposed by customers who cannot pay high $50/month bills, and is generally thought to consist of two pieces. The first involves having multiple carriers share large swaths of network infrastructure, something many Western carriers shied away from due to intellectual property fears and questions of who would pay for maintenance/traffic/etc. The second plank of the “Indian model” is to outsource network management to equipment providers (Ericsson helped to pioneer this model, in much the same way that the foundries helped the first fabless companies take off) – again, something traditional carriers shied away from, given the lack of control a firm would have over its own infrastructure and services.

Just as in the fabless semiconductor company case, this low-cost network management business model has many risks, but it has enabled carriers in India, Africa, and Latin America to focus on getting and retaining customers, rather than building expensive networks. The result? We’re starting to see some Western carriers adopt “Indian model” style innovations. One of the most prominent examples of this is Sprint’s deal to outsource its day-to-day network operations to Ericsson! Is this a sign that the “Indian model” might disrupt the traditional carrier model? Only time will tell, but I wouldn’t be surprised.

(Image credit) (Image credit – Foundry market share) (Image credit – mobile users via Economist)


POWER trip


I recently read The Race for a New Game Machine, a new book which details the trials and tribulations behind the creation of the chips (which run on the POWER architecture, hence the title of this post) that powered Microsoft’s Xbox 360 and Sony’s PlayStation 3 next-gen gaming consoles.

The interesting thing the book reveals is that the same IBM team responsible for designing the PlayStation 3’s chip (the Cell), with support from partners Sony and Toshiba, was asked halfway through the Cell design process to adapt the heart of that chip for the chip which would go into Microsoft’s Xbox 360 (the Xenon)!

Ironically, even though work on the Xbox 360’s chip started well after work on the PlayStation 3’s, manufacturing issues meant that Microsoft actually had a test chip BEFORE Sony did.

As the book was written from the perspective of David Shippy and Mickie Phipps, two of the engineering leads from IBM, the reader gets a first-hand account of what it was like to be on the engineering team. While the technical details are more watered down than I would have personally liked (presumably to market the book to the broader public), it dove a lot deeper into the business/organizational side of things than I thought IBM legal would allow.

Four big lessons stood out to me after reading this:

  • Organization is important. Although ex-IBM CEO Lou Gerstner engineered one of the most storied corporate turnarounds of all time, helping to transform IBM from a failing mainframe company into a successful and well-integrated “solutions” company, Shippy and Phipps’ account reveals a deeply dysfunctional organization. Corporate groups pursued more projects than the engineering teams could support, and rival product/engineering groups refused to work together in the name of marking territory. In my mind, the Cell chip failed in its vision of becoming the new architecture for all “smart electronic devices” in no small part because of this organizational dysfunction.
  • Know the competition. One thing which stood out to me as a best practice for competitive engineering projects was the effort, described in an early chapter, that IBM made to predict how Intel’s chips would perform at the time of the product launch. I’m not sure how often this is done in engineering efforts, but the fact that IBM tried to understand (and not undersell) the capabilities of Intel’s chips during the launch window gave the IBM team a clear goal and set of milestones for determining success. That their chip continues to have the highest operating clock speed and a throughput computing power which vastly exceeds Intel’s high-end chips is a testament to the success of that effort.
  • Morale is important. If there was one feeling the authors were able to convey in the book, it was frustration. Frustration at the organizational dysfunction which plagued IBM. Frustration at the not-quite-ethical shenanigans IBM engaged in to deliver the same processing core to two competitors. Frustration at morale-shattering layoffs and hiring freezes. It’s no secret today that IBM’s chip-making division is not the most profitable division in IBM (although this is partly because IBM relies on the division not to make profits, but to give its server products a technology advantage, which then lets it sell more profitable software and services). IBM is certainly not doing itself any favors, then, by working its engineers to the point of exhaustion. Seeing how both authors left IBM during or shortly after this project, I can only hope that IBM has changed things, or else the world may be short yet another talented chipmaker.
  • Float like a butterfly, sting like a bee. Why did Microsoft “get the jump” on Sony, despite the latter starting far in advance? I trace it to two things. First, immediately upon seeing an excellent new chip technology (ironically, the core processor for the PlayStation 3), Microsoft seized the opportunity: it refused to take a different chip from the one it wanted, it put its money where its mouth was, and it moved as fast as it could. Second, Microsoft set up a backup manufacturing line in Singapore (at a contract chip manufacturer called Chartered). This was expensive and risky, but Microsoft realized it would be good insurance against problems on IBM’s line and a good way to quickly ramp up production. This combination of betting big but betting smart (with a way to cover oneself if one is wrong) is a hallmark of Microsoft’s business strategy. And, in this case, they made the right call. The Xbox 360, while not selling as well as Nintendo’s Wii (which incidentally runs an IBM POWER chip as well), has still been fairly successful for Microsoft (having the highest attach rate – games sold per machine – of any console), and Microsoft had the backup plan necessary to deal with the risk that IBM’s manufacturing process would run into problems (which it did).

If you’re interested in the tears and sweat that went into designing IBM’s “PB” processing core (the book reveals that PB stands for PlayBox – an in-joke on Shippy’s team about how the technology being designed was for both the PLAYstation 3 and the xBOX), in a first-hand account of how difficult it is to design next-generation semiconductor products, or in how IBM got away with designing the same product for two competitors, I’d highly recommend this book.

(Image credit – book cover) (Image – Cell chip)

Book: The Race for a New Game Machine (Amazon link)


Sovereign Wealth Matters

My friend Serena, who you may know as one of the co-founders of My Mom is a Fob and My Dad is a Fob, is currently trying to find a way out of Thailand, something which the protests at Bangkok’s two airports have made much more difficult. I wish Serena and her family the best of luck and a safe trip back.

While a lot of press attention is dedicated to the direct whys of the protests (demands that the current Prime Minister step down because of his ties to a previously deposed Prime Minister, his brother-in-law), less attention is paid to the role that Singaporean sovereign wealth fund Temasek Holdings played in the whole ordeal.

The former Prime Minister made the mistake of selling his large 50% stake in Thai telecommunications company Shin Corporation to Temasek, despite:

  • being accused of insider trading only a short while before
  • violating a law banning the transfer of majority control of telecommunications companies to foreign companies
  • making the sale without paying any capital gains taxes

The result of these accusations was widespread riots, the Prime Minister dissolving Parliament and, eventually, his removal by a military coup.

And so, what have we learned here? Sovereign wealth funds are not mere curiosities whereby oil-rich states (Dubai, Abu Dhabi’s Mubadala, Norway, etc.) and Asian countries (China, Singapore) buy up HUGE stakes in companies, private equity-style (some of the research I did on these funds back in January put their total global size at about ~$3 trillion). They have serious political consequences, as the world is only beginning to discover:

Yes, we’re in the midst of a global recession right now, but think – what better time for a sovereign wealth fund to buy up companies than when prices are low and when governments are least likely to raise a fuss about someone willing to inject capital into their struggling businesses?

(Image Source) (Image Source)


Playing with Monopoly

With the recent challenges to Google’s purchase of DoubleClick, Microsoft’s endless courtship of Yahoo, and the filing of more papers in the upcoming Intel/AMD case, the question of “why should the government break up monopolies?” becomes much more relevant.

This is a question that very few people ask, even though it is oftentimes taken for granted that the government should indeed engage in anti-trust activity.

The logic behind modern anti-trust efforts goes back to the era of the railroad, steel, and oil trusts of the Gilded Age, when massive and abusive firms engaged in collusion and anti-competitive behavior to fix prices and prevent new entrants from joining the marketplace. As any economist will be quick to point out, one of the secrets behind the success of a market economy is competition – whether it be workers competing with workers to be more productive, or firms competing with firms to deliver better and cheaper products to their customers. When you remove competition, there is no longer any pressing reason to guarantee quality or cost.

So – we should regulate all monopolies, right? Unfortunately, it’s not that simple. The logic that competition is always good is greatly oversimplified, as it glosses over two key things:

  1. It’s very difficult to determine what is a monopoly and what isn’t.
  2. Technology-driven industries oftentimes require large players to deliver value to the customer.

What’s a Monopoly?


While we would all love monopolies to have clear and distinguishable characteristics – maybe an evil-looking man dressed all in black, laughing sinisterly as his diabolical plans destroy a pre-school? – the fact of the matter is that it is very difficult for an economist or businessperson to tell what counts as a monopoly and what doesn’t, for four key reasons:

  1. Many of the complaints and lawsuits against “monopolies” are brought by competitors. Who is trying to sue Intel? AMD. Who complained loudly about Microsoft’s bundling of Internet Explorer into Windows? Netscape.
  2. “Market share” has no meaning. In a sense, there are a lot of monopolies out there. Orson Scott Card has a 100% market share in books pertaining to the Ender’s Game series. McDonald’s has a 100% market share in Big Macs. This may seem like I’m just playing with semantics, but this is actually a fairly serious problem in the business world. I would even venture that a majority of growth strategy consulting projects are due to the client being unable to correctly define the relevant market and relevant market share.
  3. What’s “monopoly-like” may just be good business. Some have argued that Microsoft and Intel are monopolies in that they bully their customers, aggressively pushing PC manufacturers to purchase only from them. But what is harder to discern is how this is any different from a company that offers aggressive volume discounts, or that hires the best-trained negotiators, or that knows how to produce the best products and demands a high price for them. Sure, Google is probably “forcing” its customers to pay more to advertise on Google, but if Google’s services and reach are the best, what’s wrong with that?
  4. “Victims” of monopolies may just be lousy at managing their business. AMD may argue that Intel’s monopoly power is hurting its bottom line, but at the end of the day, Intel isn’t directly to blame for AMD’s product roadmap mishaps or its disastrous acquisition of ATI. Google isn’t directly to blame for Microsoft’s inability to compete online.

Big can be good?

This may come as a shock, but there are certain cases where large monolithic entities are actually good for the consumer. Most of these involve technological innovation. Here are a few examples:

  • Semiconductors – The digital revolution would not have been possible without the fast, power-efficient, and tiny chips which act as their brains. What is not oftentimes understood, however, is the immense cost and time required to build new chips. It takes massive companies with huge budgets to build tomorrow’s chips. It’s for this reason that most chip companies don’t run their own manufacturing centers and are steadily slowing down their R&D/product roadmaps as it becomes increasingly costly to design and build out chips.
  • Pharmaceuticals – Just as with semiconductors, it is very costly, time-consuming, and risky to do drug development. Few of today’s biotech startups can actually even bring a drug to market — oftentimes hoping to stay alive just long enough to partner with or be bought by a larger company with the money and experience to jump through the necessary hoops to take a drug from benchside to bedside.
  • Software platforms – Everybody has a bone to pick with Microsoft’s shoddy Windows product line. But what few people recognize is how much the software industry benefited from the role Microsoft played early in the computer revolution. By quickly becoming the dominant operating system, Microsoft’s products made it easier for software companies to reach wide audiences. Instead of designing 20 versions of every application/game to run on 20 OSs, Microsoft made it possible to design just one. This, of course, isn’t to say that we need an OS monopoly right now to build a software industry, but it is fair to say that Microsoft’s early “monopoly” was a boon to the technology industry.

The problem with today’s anti-trust rules and regulations is that they are legal rules and regulations, not economic ones. In that way, while they may protect against many of the abuses of the Gilded Age (by preventing firms from getting 64.585% market share and preventing them from monopolistic actions 1 through 27), they also unfortunately act as deterrents to innovation and good business practice.

Instead, regulators need to take a broader, more holistic view of anti-trust. Instead of market share litmus tests and sob stories from the Netscapes of the world, regulators need to focus first on determining whether the offender in question is acting in a harmfully anticompetitive way at all, and second on whether there is credible economic value in the institutions they seek to regulate.
