Phylo

(Cross posted to Bench Press)

A few years ago, I blogged about an ingenious crowdsourced game called Fold.It. The concept was pretty simple:

  • Use human intuition to help solve complicated three-dimensional protein folding challenges – an approach that is oftentimes as effective as, but significantly faster and cheaper than, computational algorithms
  • Pool together lots of human volunteers
  • Turn the whole experience into a game to get more volunteers to spend more time

The result was a nifty little game whose findings have made it, to date, into a number of peer-reviewed publications (see the PNAS paper here and the Nature Structural & Molecular Biology paper here)!

Well, some researchers at McGill University in Canada want to take a page out of this playbook with a game they built called Phylo (HT: MedGadget) to help deal with another challenging issue in bioinformatics: multiple sequence alignment. In a nutshell, to better understand DNA and how it impacts life, we need to see how stretches of DNA line up with one another. Now, computers are extremely good at taking care of this problem for short stretches of DNA and for “roughly” aligning longer stretches of DNA – but it’s fairly difficult and costly to do it accurately for long stretches using computer algorithms.
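
To make the problem concrete, here is a minimal sketch – mine, not Phylo’s actual scoring system – of how a candidate alignment might be scored using simple sum-of-pairs match/mismatch/gap penalties (the penalty values below are purely illustrative):

```python
# Hypothetical sum-of-pairs scoring for a multiple sequence alignment.
# The penalty values are illustrative, not Phylo's real scoring rules.
MATCH, MISMATCH, GAP = 1, -1, -2

def sum_of_pairs_score(alignment):
    """Score a list of equal-length gapped sequences by comparing
    every pair of sequences at every column."""
    score = 0
    for col in range(len(alignment[0])):
        for i in range(len(alignment)):
            for j in range(i + 1, len(alignment)):
                a, b = alignment[i][col], alignment[j][col]
                if a == "-" or b == "-":
                    if a != b:        # a base paired against a gap
                        score += GAP
                elif a == b:
                    score += MATCH
                else:
                    score += MISMATCH
    return score

# Two candidate alignments of the same sequences - higher is better:
print(sum_of_pairs_score(["AC-GT", "ACGGT", "AC-GT"]))  # 8
print(sum_of_pairs_score(["ACG-T", "ACGGT", "-ACGT"]))  # -5
```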

People, however, are curiously intuitive about patterns and shapes. So, the researchers turned the multiple sequence alignment problem into a puzzle game they’ve called Phylo (see image below) where the goal is to line up multiple colored blocks. Players tackle the individual puzzles (in a browser or even on their mobile phone) and the researchers aggregate all of this into improved sequence alignments which help them better understand the underlying genetics of disease.

[Screenshot of a Phylo alignment puzzle]

And how has it been doing? According to the McGill University press release:

So far, it has been working very well. Since the game was launched in November 2010, the researchers have received more than 350,000 solutions to alignment sequence problems. “Phylo has contributed to improving our understanding of the regulation of 521 genes involved in a variety of diseases. It also confirms that difficult computational problems can be embedded in a casual game that can easily be played by people without any scientific training,” Waldispuhl said. “What we’re doing here is different from classical citizen science approaches. We aren’t substituting humans for computers or asking them to compete with the machines. They are working together. It’s a synergy of humans and machines that helps to solve one of the most fundamental biological problems.”

With the new games and platforms, the researchers are hoping to encourage even more gamers to join the fun and contribute to a better understanding of genetically-based diseases at the same time.

Try it out – I have to admit I’m not especially good with puzzle games, so I haven’t been doing particularly well, but the researchers have done a pretty good job with the design of the game (esp. relative to many other academic-inspired gaming programs that I’ve seen) – and who knows, you might be a key contributor to the next big drug treatment!

Qualcomm Trying to Up its PR with Snapdragon Stadium

As a nerd and a VC, I’m very partial towards “enabling technologies” – the underlying technology that makes stuff tick. That’s one reason I’m so interested in semiconductors: much of the technology we see today has its origins in something that a chip or semiconductor product enabled. But, despite the key role they (and other enabling technologies) play in creating the products that we know and love, most people have no idea what “chips” or “semiconductors” are.

Part of that obscurity is deliberate – chip companies exist to help electronics/product companies, not steal the spotlight. The only exception to that rule that I can think of is Intel, which has spent a fair amount over the years on its “Intel Inside” branding and the numerous Intel Inside commercials that have popped up.

While NVIDIA has been good at generating buzz amongst enthusiasts, I would maintain that no other semiconductor company has quite matched Intel in terms of public brand awareness – an awareness that has probably helped Intel command a higher price point because the public believes (rightly or wrongly) that computers with “Intel inside” are better.

Well, Qualcomm looks like it wants to upset that. Qualcomm makes chips that go into mobile phones and tablets and has benefited greatly from the rise of smartphones and tablets over the past few years, getting to the point where some might say it has a shot at being a real rival for Intel in terms of importance and reach. But for years, the most your typical non-techy person might have heard about the company is the fact that it holds the naming rights to San Diego’s Qualcomm Stadium – home of the San Diego Chargers and former home of the San Diego Padres.

Well, on December 16th, in what is probably a very interesting test of whether it can boost consumer awareness of the Snapdragon product line it’s aiming at the next generation of mobile phones and tablets, Qualcomm announced it will rename Qualcomm Stadium to Snapdragon Stadium for 10 days (coinciding with the San Diego County Credit Union Poinsettia Bowl and the Bridgepoint Education Holiday Bowl) – check out the pictures from the Qualcomm blog below!

[Photos of the Snapdragon Stadium signage, from the Qualcomm blog]

Will this work? Well, if the goal is to get millions of people to buy phones with Snapdragon chips inside overnight – the answer is probably no. Running this sort of rebranding for only 10 days, for games that aren’t the Super Bowl, just won’t deliver that kind of PR boost. But, as a test of whether their consumer branding efforts raise consumer awareness about the chips that power their phones, and potentially demand for “those Snapdragon watchamacallits” in particular? This might be just what the doctor ordered.

I, for one, am hopeful that it does work – I’m a sucker for seeing enabling technologies and the companies behind them like Qualcomm and Intel get the credit they deserve for making our devices work better, and, frankly, having more people talk about the chips in their phones/tablets will push device manufacturers and chip companies to innovate faster.

(Image credit: Qualcomm blog)

Motorola Solutions Takes on the Tablet

I mentioned a couple of months ago my recent “conversion” to the tablet: how I am now convinced that tablets are more than just a cool consumer device, but represent a new vector of compute power which will find itself going into more and more places.

One particular use case which fascinated me was the non-consumer setting, which is mostly “fresh territory” for tablet manufacturers to pursue. But, whereas most manufacturers – like Lenovo and Toshiba – are taking on the non-consumer setting by chasing the traditional enterprise technology market, Motorola Solutions has taken a much more customized approach (HT: EETimes) which really embodies some of the strengths of the Android approach. (Motorola Solutions, which builds things like hardware/IT systems for businesses and governments, was spun out from the original Motorola alongside – but separate from – the consumer-oriented Motorola Mobility, which was recently acquired by Google.)

Instead of building yet another Android Honeycomb tablet, Motorola Solutions has built a ruggedized Android tablet called the ET1 (Enterprise Tablet 1 – hey, they sell mainly to industrial and government customers where you don’t need catchy names :-)), with the emphasis on the word “ruggedized”. Yes, it has a 7” touchscreen, but this really wasn’t meant for casual consumer use at home: it’s meant to be used in the field/factory setting, built with a strengthened case and Gorilla Glass screen (so that it can survive drops/spills/impacts), support for external accessories (e.g. barcode scanners, printers, holsters/cases), a special hot-swappable rapid-charge battery pack so that you can re-juice the device without interrupting its function, and a “hardened” (translation: more secure by stripping out unnecessary consumer-oriented capabilities) Android operating system with support for rapidly switching between multiple user profiles (because multiple employees might use the same device on different shifts).

Will this device be a huge success? Probably not by any consumer electronic manufacturer’s metric. After all, the tablet isn’t meant for consumers (and won’t be priced that way or sold through stores/consumer eCommerce sites). But, that’s the beauty of the Android approach. If you’re not building a consumer tablet, you don’t have to. In the same way that Android phone manufacturers/software developers can experiment with different price points/business models in Africa, manufacturers can leverage (and customize) Android to target different use models and form factors entirely to satisfy the needs of specific market segments/ecosystem players, taking what they need and changing/removing what they don’t. I don’t know for sure what Motorola Solutions is aiming to get out of this, but maybe the goal isn’t to put as many of these devices out there as possible but simply to add a few key accounts with which to sell other services/software. I have no idea, but the point is that an open platform lets you do things like this. Or, to put it more simply, as I said before about Linux/Android: “go custom or go home”.

(Image credit)

A Visit to 1800s London and, Oddly Enough, Taiwan

Those who follow my Twitter/Google Plus saw that I attended the Dickens Fair this past weekend (thanks to my lovely and talented friend Felicia for telling me about it and getting my girlfriend and me comped tickets!)

What is the Dickens Fair, you ask? Apparently, it’s a Bay Area tradition dating to the 1970s in which a group of performers, businesses, and cooks set up an imitation of the London that famous author Charles Dickens (1812-1870) wrote about and lived in.

And, like with Comicon, costumes and cosplaying are not only tolerated, but encouraged!

The entire experience was very fun. The shops were all period – selling period crafts and clothing and food. It was fun to just walk around and check out what people were dressed as, what they were doing, the accents they were assuming, and the various performances by singers/dancers. Feeling a little out of place, I decided to buy a hat to better blend in:

[Photo: wearing my newly purchased hat]

Another thing which turned out to be a fascinating experience was the antique book shop. While I didn’t buy anything, my girlfriend dug up a guide to the Japanese Empire written in 1914. At the time, the island of Taiwan was a part of the Japanese Empire, so the book dedicates an entire chapter to describing it. While it was nice to read good things about the island (about its beauty and nice climate), I was a little amused/shocked at the enormous amount of time the writer spent covering the “savage” aboriginal tribes and their practice of decapitation, and the extent to which the Japanese colonizers kept those practices at bay. Not really believing the writer, I turned to Wikipedia – and lo and behold, there apparently was widespread practice of headhunting amongst the aborigines!

That must explain why I’m so fierce and aggressive 🙂

Fat Flora

November’s paper was published in Nature in 2006 and covers a topic I’ve become increasingly interested in: the impact of the bacteria that have colonized our bodies on our health (something I’ve blogged about here and here).

The idea that our bodies are, in some ways, more bacteria than human (there are 10x more gut bacteria – or flora – than human cells in our bodies) and that those bacteria can play a key role in our health is not only mind-blowing, it opens up another potential area for medical/life sciences research and future medicines/treatments.

In the paper, a genetics team from Washington University in St. Louis explored a very basic question: are the gut bacteria from obese individuals different from those from non-obese individuals? To study the question, they performed two types of analyses on a set of mice with a genetic defect that leaves them unable to “feel full” (and hence likely to become obese) and on genetically similar mice lacking that defect (the so-called “wild type” control).

The first was a series of genetic experiments comparing the bacteria found within the gut of obese mice with those from the gut of “wild-type” mice (this sort of comparison is something the field calls metagenomics). In doing so, the researchers noticed a number of key differences in the “genetic fingerprint” of the two sets of gut bacteria, especially in the genes involved in metabolism.
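
For intuition only, here is a cartoon of what that kind of comparison might look like – the gene categories and counts below are invented for illustration, not taken from the paper:

```python
# Invented metagenomic "fingerprints": counts of gene categories
# observed in each gut community (not real data from the paper).
from collections import Counter

obese_gut = Counter({"carb_metabolism": 120, "motility": 40, "other": 300})
lean_gut  = Counter({"carb_metabolism":  70, "motility": 45, "other": 310})

# Flag categories that look enriched in the obese community:
for category in obese_gut:
    ratio = obese_gut[category] / lean_gut[category]
    marker = "  <-- enriched" if ratio > 1.2 else ""
    print(f"{category}: {ratio:.2f}x obese vs. lean{marker}")
```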

But, what did that mean to the overall health of the animal? To answer that question, the researchers did a number of experiments, two of which I will talk about below. First, they did a very simple chemical analysis (see Figure 3b to the left) comparing the “leftover energy” in the waste (aka poop) of the obese mice to the waste of wild-type mice (and, yes, all of this was controlled for the amount of waste/poop). Lo and behold, the obese mice (the white bar) seemed to have gut bacteria which were significantly better at pulling calories out of the food, leaving less “leftover energy”.

While an interesting result, especially when thinking about some of the causes and effects of obesity, a skeptic might look at that data and say that it’s inconclusive about the role of gut bacteria in obesity – after all, obese mice could have all sorts of other changes which make them more efficient at pulling energy out of food. To address that, the researchers did a very elegant experiment involving fecal transplant: that’s right, colonizing one mouse with the bacteria from another mouse (by transferring poop). The figure to the right (Figure 3c) shows the results of the experiment. After two weeks, despite starting out at about the same weight and eating similar amounts of the same food, wild-type mice that received bacteria from other wild-type mice showed an increase in body fat of about 27%, whereas the wild-type mice that received bacteria from the obese mice showed an increase of about 47%! Clearly, the gut bacteria in obese mice are playing a key role in calorie uptake!

In terms of areas of improvement, my main complaint about this study is just that it doesn’t go far enough. The paper never gets too deep on what exactly the bacteria in each sample were, and we never really get a sense of the real variation: how much do bacteria vary from mouse to mouse? Are they completely different bacteria? The same bacteria but in different numbers? The same bacteria, but each functioning differently? Do two obese mice have the same bacteria? What about a mouse that isn’t quite obese but not quite wild-type either? Furthermore, the paper doesn’t show us what happens if an obese mouse has its bacteria replaced with the bacteria from a wild-type mouse. These are all interesting questions that would really help researchers and doctors understand what is happening.

But, despite all of that, this was a very interesting finding and has major implications for doctors and researchers in thinking about how our complicated flora impact and are impacted by our health.

(Image credit) (Figure 3 from the paper)

Paper: Turnbaugh et al., “An obesity-associated gut microbiome with increased capacity for energy harvest.” Nature (444). 21/28 Dec 2006. doi:10.1038/nature05414

Some Career Advice for Students

Many students trying to pick classes/majors in college will end up consulting with their counselors/academic advisors who, in turn, will almost always reply with very generic advice along the lines of: “study what you love”.

But as my girlfriend once pointed out, the problem with asking academic advisors that question is that academic advisors tend to be academics – and in academia, you can make a career out of studying anything. Outside of academia, that is not so true. Look no further than the paradox of record-high unemployment for recent college graduates even as almost every startup I’ve spoken with expresses concerns about finding and retaining qualified employees.

Obviously, our education system is failing to meet the needs of our students and employers. But, other than hoping that the system miraculously fixes itself, my advice to students is this: take classes that teach broadly employable skills. You don’t need to take a lot of them, and nobody’s asking you to major in something that you don’t want to – college is, after all, about broadening your horizons and studying what interests you. But, in a competitive job market and a turbulent economy, the worker that is in the best position is the worker who can move between industries/jobs easily (getting out of bad jobs/industries and moving into better-paid/more interesting ones) and who can quickly demonstrate value to their boss (so as to make themselves indispensable faster).

So what sort of skills am I referring to? Off the top of my head (I’m sure there are others), three come to mind:

  • Accounting – All organizations that deal with money need people with accounting chops. From my experience, the executives/employees who are the most versatile across industries are the CFOs — they can plug into almost any business or organization and can quickly help their employers out. You may not want to be an accountant, but in a pinch, having those skills can help you get hired or find work as you figure out your next move.
  • Programming – Programming as a skill is relatively generalizable. While I wouldn’t necessarily get an iPhone developer to write an operating system (or vice versa), folks with programming chops can quickly get up to speed on new projects at new companies and, as a result, can quickly crank out functioning code to help their employers.
  • Statistics – You don’t need to be a math genius to be hireable. But, as computers become faster and more important, more organizations are turning to number crunching as a way to stay competitive. Not only will “data scientists” and statisticians become more in demand, individuals who have familiarity with those tools will be in a better position at their companies and be able to quickly help out a new employer.

The skeptic will point out that a lot of this can be outsourced. And, that’s certainly true – but in my experience, there is not only a limit on what companies are willing to outsource, there is also just huge value for any employee in tacking those skills onto what they are already doing. A salesperson who is also good at crunching statistics on who to sell to next is far more valuable than a “regular” salesperson. A marketing guy with programming chops probably has a better understanding of a product or a technology than a “regular” marketing guy. And an operations guy who also understands the nitty-gritty financial details is going to be able to do a better job than an operations guy who doesn’t. Not to mention: the skills are broadly applicable, so if one company doesn’t have a good spot, there’s always another organization somewhere that will.

My Google Reader Substitute

It’s hard to believe that Google Reader has only been “dead” for a few weeks. I use the quotes because, while the core RSS reader functionality is still going, the reason it was all-consuming for me (and, frankly, one of the biggest sources of my goodwill towards Google) – the social functionality – is dead and gone.

I tried using Google+ as a means of sharing for two weeks – I really did. But it didn’t stick. First, the sharing from within Google Reader was clunky at best – I had to hit the “+1” or the new “G+ share” button, then select the Reader circle I had made, and then do another click to share – an awkward process. Second, Google+ just didn’t cut it for what I used Google Reader’s social functionality for. I use Google Reader to read. Google+ is great for sharing snippets and pictures and thoughts – but it’s not a reading platform, so treating it like a replacement for Google Reader’s sharing functionality was never going to work. Lastly, the point I brought up in my previous post about different levels of interest in different types of content still rings true – the people who I shared with on Google Reader were opting in to my content shares; most of my friends on Google+ are opting in to my personal shares. The two aren’t always the same.

So, ultimately, I threw in the towel and decided to use Tumblr as an alternative. As you may know, Tumblr is a popular and fairly versatile mini-blogging tool – it lies somewhere between Twitter (where you are limited to 140 characters) and WordPress in terms of simplicity. But, it packs a ton of cool features to make it, from what I can tell, an okay substitute for Google Reader’s sharing functionality:

  • a full-length RSS feed so that folks can subscribe to my “shares” from a reading platform like Google Reader (see the sketch after this list)
  • a lot of compelling sharing features (liking, “re-blogging”)
  • a browser bookmarklet pretty similar to what Google Reader had (so I can share things as I go)
  • support for custom domain (so my Tumblr is now officially http://tumblr.benjamintseng.com/)
  • support for Disqus (so it can do comments)
  • pretty versatile HTML/CSS templating system so I can do further customizations later
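
As a quick illustration of what that full-length feed enables, here is a hedged sketch using the third-party feedparser package – it assumes the standard Tumblr /rss path on the custom domain above:

```python
# Pull the five most recent "shares" from the Tumblr feed.
# Assumes the standard Tumblr /rss endpoint on the custom domain.
import feedparser

feed = feedparser.parse("http://tumblr.benjamintseng.com/rss")
for entry in feed.entries[:5]:
    print(entry.title, "-", entry.link)
```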

It’s not perfect. It’s not integrated into Google Reader anymore – so all sharing/interaction will need to be done using the bookmarklet or on the site directly. But, the full-length RSS feed means we can keep reading, and the sharing/Disqus functionality means we can still like, re-share, and comment.

I’m hoping my friends who once used Google Reader will join me on Tumblr, and I’m hoping my friends who were using Tumblr all along will welcome me to their world :-). I just started with the integration, but I am hoping to play around with the templating system to more tightly integrate the two sites in the near future.

(Image credit)

Homing Stem Cell Missile Treatments

Another month, another paper

This month’s paper is about stem cells: those unique cells within the body which have the capacity to assume different roles. While people have talked at length about the potential for stem cells to function as therapies, one thing holding them back (with the main exception being bone marrow cells) is that it’s very difficult to get stem cells exactly where they need to be.

With bone marrow transplants, hematopoietic stem cells naturally “home” (like a missile) to where they need to be (the blood-making areas of the body). But with other types of stem cells, that is not so readily true, making it difficult or impossible to use the bloodstream as a means of administering stem cell therapies. Of course, you could try to inject, say, heart muscle stem cells directly into the heart, but that’s not only risky/difficult, it’s also artificial enough that you’re not necessarily providing the heart muscle stem cells with the right triggers/indicators to push them towards becoming normal, functioning heart tissue.

Researchers at Brigham & Women’s Hospital and Mass General Hospital published an interesting approach to this problem in the journal Blood (yes, that’s the real name). They used a unique feature of white blood cells, called leukocyte extravasation, that I blogged about very briefly before, which lets white blood cells leave the bloodstream towards areas of inflammation.

[Diagram: the stages of leukocyte extravasation]

The process is described in the image above, but it basically involves the sugars on the white blood cell’s surface, called Sialyl Lewis X (SLeX), sticking to the walls of blood vessels near sites of tissue damage. This causes the white blood cell to start rolling (rather than flowing through the blood) which then triggers other chemical and physical changes which ultimately leads to the white blood cell sticking to the blood vessel walls and moving through.

The researchers “borrowed” this ability of white blood cells for their mesenchymal stem cells. The researchers took mesenchymal stem cells from a donor mouse and chemically coated them with SLeX – the hope being that the stem cells would start rolling anytime they were in the bloodstream and near a site of inflammation/tissue damage. After verifying that these coated cells still functioned (they could still become different types of cells, etc.), they then injected them into mice (who received injections in their ears with a substance called LPS to simulate inflammation) and used video microscopes to measure the speed of different mesenchymal stem cells in the bloodstream. In Figures 2A and 2B to the left, the mesenchymal stem cell coated in SLeX is shown in green and a control mesenchymal stem cell is shown in red. What you’re seeing is the same spot in the ear of a mouse under inflammation with the camera rolling at 30 frames per second. As you can see, the red (untreated) cell moves much faster than the green – in the same number of frames, it’s already left the vessel area! That, and a number of other measurements, led the researchers to conclude that their SLeX coat actually got their mesenchymal stem cells to slow down near points of inflammation.

But, does this slowdown correspond with the mesenchymal stem cells exiting the bloodstream? Unfortunately, the researchers didn’t provide any good pictures, but they did count the number of different types of cells that they observed in the tissue. When it came to ears with inflammation (what Figure 4A below refers to as “LPS ear”), the researchers saw an average of 48 SLeX-coated mesenchymal stem cells versus 31 uncoated mesenchymal stem cells within their microscopic field of view (~50% higher). When it came to the control (the “saline ear”), the researchers saw 31 SLeX-coated mesenchymal stem cells versus 29 uncoated (~7% higher). Conclusion: yes, coating mesenchymal stem cells with SLeX and introducing them into the bloodstream lets them “home” to areas of tissue damage/inflammation.

[Figure 4A from the paper]

As you can imagine, this is pretty cool – a simple chemical treatment could help us turn non-bone-marrow-stem cells into treatments you might receive via IV someday!

But, despite the cool finding, there are a number of improvements this paper needs. Granted, I received it pre-print (so I’m sure there are some more edits that need to happen), but my main concerns are around the quality of the figures presented. Without any clear time indicators or pictures, it’s hard to know what exactly the researchers are seeing. Furthermore, it’s difficult to tell for sure whether or not the treatment did anything to the underlying stem cell function. The supplemental figures of the paper are only the first step in what, to me, needs to be a long and deep investigation into whether or not those cells do what they’re supposed to – otherwise, this method of administering stem cell therapies is dead in the water.

(Figures from paper) (Image credit: Leukocyte Extravasation)

Paper: Sarkar et al., “Engineered Cell Homing.” Blood. 27 Oct 2011 (online print). doi:10.1182/blood-2010-10-311464

The Monster

I was asked recently by a friend about my thoughts on the “Occupy Wall Street” movement. While people a heck of a lot smarter and more articulate than me have weighed in, most of it has been focused on finger-pointing (who’s to blame) and judgment (do they actually stand for anything, “it’s the Tea Party of the Left”).

As corny as it sounds, my first thought after hearing about “Occupy Wall Street” wasn’t about right or wrong or even really about politics: it was about John Steinbeck and his book The Grapes of Wrath. It’s a book I read long ago in high school, but it was one which left a very deep impression on me. While I can’t even remember the main plot (other than that it dealt with a family of Great Depression and Dust Bowl-afflicted farmers who were forced to flee Oklahoma towards California), what I do remember was a very tragic description of the utter confusion and helplessness that gripped the people of that era (from Chapter 5):

“It’s not us, it’s the bank. A bank isn’t like a man. Or an owner with fifty thousand acres, he isn’t like a man either. That’s the monster.”

“Sure,” cried the tenant men, “but it’s our land. We measured it and broke it up. We were born on it, and we got killed on it, died on it. Even if it’s no good, it’s still ours. That’s what makes it ours—being born on it, working it, dying on it. That makes ownership, not a paper with numbers on it.”

“We’re sorry. It’s not us. It’s the monster. The bank isn’t like a man.”

“Yes, but the bank is only made of men.”

“No, you’re wrong there—quite wrong there. The bank is something else than men. It happens that every man in a bank hates what the bank does, and yet the bank does it. The bank is something more than men, I tell you. It’s the monster. Men made it, but they can’t control it.”

And therein lies the best description I have ever read of the tragedy of the Great Depression – and of every economic crisis since. The many un- and under-employed people in the US are clearly under a lot of stress. And, like the farmers in Steinbeck’s novel, it’s completely understandable that they want to blame somebody. And so they point to the most obvious culprits: “the 1%”, the bankers and financiers who work on “Wall Street”.

But, I think Steinbeck understood this is not really about the individuals. Obviously, there was a lot of wrongdoing that happened on the part of the banks which led to our current economic “malaise.” But I think for the most part, the “1%” aren’t interested in seeing their fellow citizen unemployed and on the street. Even if you don’t believe in their compassion, their greed alone guarantees that they’d prefer to see the whole economy growing with everyone employed and productive, and their desire to avoid harassment alone guarantees they’d love to find a solution which ends the protests and the finger-pointing. They may not be suffering as much as those in the “99%”, but I’m pretty sure they are just as confused and hopeful that a solution comes about.

The real problem – Steinbeck’s “monster” – is the political and economic system people have created but can’t control. Our lives are driven so much by economic forces and institutions which are intertwined with one another on a global level that people can’t understand why they or their friends and family are unemployed, why food and gas prices are so expensive, why the national debt is so high, etc.

Now, a complicated system that we don’t have control of is not always a bad thing. After all, what is a democracy supposed to be but a political system that nobody can control? What is the point of a strong judiciary but to be a legal authority that legislators/executives cannot overthrow? Furthermore, it’s important for anyone who wants to change the system for the better to remember that the same global economic system which is causing so much grief today is more responsible than any other force for creating many of the scientific and technological advancements which make our lives better and for lifting (and continuing to lift) millions out of poverty in countries like China and India.

But, it’s hard not to sympathize with the idea that the system has failed on its promise. What else am I (or anyone else) supposed to think in a world where corporate profits can go up while unemployment stays stubbornly near 10%, where bankers can get paid bonuses only a short while after their industry was bailed out with taxpayer money, and where the government seems completely unable to do more than bicker about an artificial debt ceiling?

But anyone with even a small understanding of economics knows this is not about a person or even a group of people. To use Steinbeck’s words, the problem is more than a man, it really is a monster. While we may not be able to kill it, letting it rampage is not a viable option either — the “Occupy Wall Street” protests are a testament to that. Their frustration is real and legitimate, and until politicians across both sides of the aisle and individuals across both ends of the income spectrum come together to find a way to “tame the monster’s rampage”, we’re going to see a lot more finger-pointing and anger.

(Image credit – Wikipedia)

Antibody-omics

I’m pretty late for my September paper of the month, so here we go

“Omics” is the hot buzz-suffix in the life sciences for anything which uses the new sequencing/array technologies we now have available. You don’t study genes anymore, you study genomics. You don’t study proteins anymore – that’s so last century, you study proteomics now. And who studies metabolism? It’s all about metabolomics. There’s even a (pretty nifty) blog post covering this with the semi-irreverent name “Omics! Omics!”.

It’s in the spirit of “Omics” that I chose a Science paper from researchers at the NIH, because it was the first time I had ever encountered the term “antibodyome”. For those of you who don’t know, antibodies are the “smart missiles” of your immune system – they are built to recognize and attack only one specific target (e.g. a particular protein on a bacterium/virus). This ability is so remarkable that, rather than rely on human-generated constructs, researchers and biotech companies oftentimes choose to use antibodies to make research tools (e.g. using fluorescent antibodies to label specific things) and therapies (e.g. using antibodies to proteins associated with cancer as anti-cancer drugs).

How the immune system does this is a fascinating story in and of itself. In a process called V(D)J recombination, your immune system’s B-cells mix, match, and scramble certain pieces of your genetic code to try to produce a wide range of antibodies capable of hitting potentially every structure they could conceivably see. And, once one of those antibodies sees something which “kind of sticks”, the B-cells undergo a process called affinity maturation, introducing all sorts of mutations in the hopes of creating an even better antibody.
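
To make the affinity maturation numbers that come up below concrete, here is a toy sketch – the sequences are hypothetical, not from the paper – of how you might measure how far a matured antibody’s code has drifted from its unmutated “germline” ancestor:

```python
# Toy measure of affinity maturation: percent of aligned positions
# where a matured antibody differs from its germline ancestor.
def percent_divergence(germline, matured):
    """Both sequences must be pre-aligned (same length); gaps skipped."""
    compared = mismatches = 0
    for g, m in zip(germline, matured):
        if g == "-" or m == "-":
            continue
        compared += 1
        if g != m:
            mismatches += 1
    return 100.0 * mismatches / compared

# Hypothetical germline/matured pair - typical antibodies run 5-15%
# diverged; the anti-HIV antibodies in the paper hit 20-50%.
germ = "CAGGTGCAGCTGGTGCAG"
mat  = "CAGGTCCAGTTGTTGCAG"
print(f"{percent_divergence(germ, mat):.0f}% diverged")  # 17% diverged
```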

Which brings us to the paper I picked – the researchers analyzed a couple of particularly effective antibodies targeted at HIV, the virus which causes AIDS. What they found was that these antibodies all bound the same part of the HIV virus, but, when they took a closer look at the 3D structures/the B-cell genetic code which made them, they found that the antibodies were quite different from one another (see Figure 3C below).

[Figure 3C from the paper]

What’s more, not only were they fairly distinct from one another, they each showed *significant* affinity maturation – while a typical antibody has 5-15% of its underlying genetic code modified, these antibodies had 20-50%! To get to the bottom of this, the researchers looked at all the antibodies they could pull from the patient – in effect, the “antibodyome”, in the same way that the patient’s genome would be all of his/her genes – and, along with data from other patients, they were able to construct a “family tree” of these antibodies (see Figure 6C below).

[Figure 6C from the paper]

The analysis shows that many of the antibodies were derived from the same initial genetic VDJ “mix-and-match”, but that afterwards quite a number of changes were made to that code, arriving at a situation where a diverse set of structures/genetic codes could attack the same spot on the HIV virus.

While I wish the paper had probed deeper with actual experimentation to take this analysis further (e.g. artificially using this method to create other antibodies with similar behavior), this paper goes a long way towards establishing an early picture of what “antibodyomics” is. Rather than study the total impact of an immune response or just the immune capabilities of one particular B-cell/antibody, this sort of genetic approach lets researchers get a very detailed yet comprehensive look at where the body’s antibodies are coming from. Hopefully, longer term, this also turns into a way for researchers to make better vaccines.

(Figures 3 and 6 from the paper)

Paper: Wu et al., “Focused Evolution of HIV-1 Neutralizing Antibodies Revealed by Structures and Deep Sequencing.” Science (333). 16 Sep 2011. doi: 10.1126/science.1207532

Google Reader Blues

If it hasn’t been clear from posts on this blog or from my huge shared posts activity feed, I am a huge fan of Google Reader. My reliance on the RSS reader tool from Google is second only to my use of Gmail. It’s my primary source of information and analysis on the world and, because a group of my close friends are actively sharing and commenting on the service, it is my most important social network.

Yes, that’s right. I’d give up Facebook and Twitter before I’d give up Google Reader.

I’ve always been disappointed by Google’s lack of attention to the product, so you would think that, after Google announced it would find a way to better integrate the product with Google+, I would be jumping for joy.

However, I am not. And, I am not the only one. E. D. Kain from Forbes says it best when he writes:

[A]fter reading Sarah Perez and Austin Frakt and after thinking about just how much I use Google Reader every day, I’m beginning to revise my initial forecast. Stay calm is quickly shifting toward full-bore Panic Mode.

(bolding and underlining from me)

Now, for the record, I can definitely see the value of integrating Google+ with Google Reader well. I think the key to doing that is finding a way to replace the not-really-used-at-all Sparks feature (which seems to have been replaced by a saved searches feature) in Google+ with Google Reader to make it easier to share high quality blog posts/content. So why am I so anxious? Well, looking at the existing products, there are two big things:

  • Google+ is not designed to share posts/content – it’s designed to share snippets. Yes, there are quite a few folks (e.g. Steve Yegge, who made the now-famous-accidentally-public rant about Google’s approach to platforms vs Amazon/Facebook/Apple’s approach to products) who make very long posts on Google+, using it almost as a mini-blog platform. And, yes, one can share videos and photos on the site. However, what the platform has not proven able to share (and what is, fundamentally, one of the best uses/features of Google Reader) is a rich site with embedded video, photos, rich text, and links. This blog post that you’re reading, for instance? I can’t share it on Google+. All I can share is a text excerpt and an image – and that reduces the utility of the service as a reading/sharing/posting platform.
  • Google Reader is not just “another circle” for Google+, it’s a different type of online social behavior. I gave Google props earlier this year for thinking through online social behavior when building their Circles and Hangouts features, but it slipped my mind then that my use of Google Reader was yet another mode of online social interaction that Google+ did not capture. What do I mean by that? Well, when you put friends in a circle, it means you have grouped that set of friends into one category and think of them as similar enough that you want to receive their updates/shared items together and to send them updates/shared items together. Now, this feels more natural to me than the original Facebook concept (where every friend is equal) and the Twitter concept (where the idea is to just broadcast everything to everybody), but it misses one dynamic: followers may have different levels of interest in different types of sharing. When I share an article on Google Reader, I want to do it publicly (hence the public share page), but only to people who are interested in what I am reading/thinking. If I wanted to share it with all of my friends, I would’ve long ago integrated Google Reader shares into Facebook and Twitter. On the flip side, whether or not I feel socially close to the people I follow on Google Reader is irrelevant: I follow them on Google Reader because I’m interested in their shares/comments. With Google+, this sort of “public, but only for folks who are interested” mode of sharing and reading is not present at all – and it strikes me as worrisome because the idea behind the Google Reader change is to replace its social dynamics with Google+’s.

Now, of course, Google could address these concerns by implementing additional features – and if that were the case, that would be great. But, putting my realist hat on and looking at the tone of the Google Reader blog post and the way that Google+ has been developed, I am skeptical. Or, to sum it up, in the words of Austin Frakt at the Incidental Economist (again bolding/underlining is by me)

I will be entering next week with some trepidation. I’m a big fan of Google and its products, in general. (Love the Droid. Love the Gmail. Etc.) However, today, I’ve never been more frightened of the company. I sure hope they don’t blow this one!

Chrome Remote Desktop

A few weeks ago, I blogged about how the web was becoming the most important and prominent application distribution platform and about Google’s efforts to embrace that direction with initiatives like ChromeOS (Google’s operating system which is designed only to run a browser/use the internet), Native Client, and the Chrome Web Store.

Obviously, for the foreseeable future, “traditional” native applications will continue to have significant advantages over web applications. As much of a “fandroid”/fan of Google as I am, I find it hard to see how I could use a Chromebook (a laptop running Google’s ChromeOS) over a real PC today, because of my heavy use of apps like Excel and the tools I rely on whenever I code.

However, you can do some pretty cool things with web applications/HTML5 which give you a sense of what may one day be possible. Case in point: enter Chrome Remote Desktop (HT: Google Operating System), a beta extension for Google Chrome which basically allows you to take control of another computer running Chrome, a la remote desktop/VNC. While this capability is nothing new (Windows has had “remote desktop” built in since at least Windows XP, and there are numerous VNC/remote desktop clients), what is pretty astonishing is that this app is built entirely using web technologies – whereas traditional remote desktops use non-web-based communications and native graphics to create the interface to the other computer, Chrome Remote Desktop does all the graphics in the browser and all the communications using either the WebSocket standard from HTML5 or Google Talk’s chat protocol! (See below as I use my personal computer to remote-control my work laptop, where I am reading a PDF on microblogging in China and am also showing my desktop background image, in which a Jedi Android slashes up an Apple Death Star.)

[Screenshot: controlling my work laptop from my personal computer via Chrome Remote Desktop]
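
To give a flavor of the plumbing involved, here is a toy sketch – emphatically not Google’s implementation – of the general idea: input events serialized as JSON and streamed from the controlling machine to the controlled one. It uses a plain TCP socket in place of a real WebSocket, and the event names are invented:

```python
# Toy remote-control relay: NOT Chrome Remote Desktop's protocol.
# JSON input events stream over a plain TCP socket for brevity.
import json
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 9999  # hypothetical local test endpoint

def host_side():
    # The machine being controlled: read newline-delimited JSON events
    # and (in a real tool) translate them into mouse/keyboard actions.
    with socket.create_server((HOST, PORT)) as server:
        conn, _ = server.accept()
        with conn, conn.makefile("r") as stream:
            for line in stream:
                print("host would apply:", json.loads(line))

def client_side():
    # The controlling machine: serialize input events and stream them.
    with socket.create_connection((HOST, PORT)) as sock:
        for event in ({"type": "mousemove", "x": 10, "y": 20},
                      {"type": "keypress", "key": "a"}):
            sock.sendall((json.dumps(event) + "\n").encode())

threading.Thread(target=host_side, daemon=True).start()
time.sleep(0.2)   # crude: give the listener a moment to start
client_side()
time.sleep(0.2)   # let the host thread print before the script exits
```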

How well does it work? The control is quite good – my mouse/keyboard movements registered immediately on the other computer – but the on-screen graphics/drawing speed was quite poor (par for the course for most sophisticated graphics-drawing apps in the browser, and for a beta extension). The means of controlling another desktop, while easy to use (especially if you are inviting someone to take a look at your machine), is very clumsy for some applications (e.g. a certain someone who wants to leave his computer in the office and use VNC/remote desktop to access it only when he needs to).

So, will this replace VNC/remote desktop anytime soon? No (nor, does it seem, were they the first to think up something like this), but that’s not the point. The point, at least to me, is that the browser is picking up more and more sophisticated capabilities and, while it may take a few more versions/years before we can actually use this as a replacement for VNC/remote desktop, the fact that we can even be contemplating that at all tells you how far browser technology has come and why the browser as a platform for applications will grow increasingly compelling.

AGIS Visual Field Score Tool

One of the things I regret the most about my background is that I lack good knowledge/experience with programming. While I have dabbled (e.g. mathematical modeling exercises in college, Xhibitr, and projects with my younger brother), I am generally more “tell” than “show” when it comes to creating software (except when it comes to writing a random Excel macro/function).

So, when I found out that my girlfriend needed some help with her glaucoma research and that writing software was the ticket, I decided to go out on a limb and help her out (link to my portfolio page).

The basic challenge is that the ophthalmology research world uses an arcane and very difficult-to-do-by-hand scoring system for taking the data from a glaucoma patient’s visual field test (see the image below for the type of measurements that might be collected) and turning it into a score (the AGIS visual field score) of how bad the patient’s glaucoma is (as described in a paper from 1994 that is so old I couldn’t find a digital copy of it!).

[Example visual field measurement from an advanced glaucoma patient]

So, I started by creating a program using the C programming language which would take this data in the form of a CSV (comma-separated values) file and spit out scores.
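
For the curious, here is a minimal sketch of that CSV-to-score flow. The real AGIS scoring rules (from the 1994 paper) are considerably more involved, so the cutoff and per-row tally below are placeholders, not the actual algorithm:

```python
# Toy stand-in for the AGIS scoring flow: read rows of visual field
# depression values (in dB) from a CSV and tally depressed test points.
# The cutoff is a placeholder, NOT the real AGIS criterion.
import csv

DEPRESSION_CUTOFF_DB = 6  # hypothetical threshold

def toy_field_scores(csv_path):
    """Return one crude severity score per row (i.e. per visual field).
    abs() lets the input use either sign convention for depressions."""
    scores = []
    with open(csv_path, newline="") as f:
        for row in csv.reader(f):
            depressions = [abs(float(v)) for v in row if v.strip()]
            scores.append(sum(d >= DEPRESSION_CUTOFF_DB for d in depressions))
    return scores

# Usage: toy_field_scores("visual_fields.csv") -> [3, 0, 7, ...]
```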

While I was pleasantly surprised that I still retained enough programming know-how to do this after a few weekends, the program was an awkward text-based monstrosity which required the clunky step of converting two-dimensional visual field data into a flat CSV file. The desire to improve on that, and the hope that my software might help others doing similar research (and might get others to build on it/let me know if I’ve made any errors), pushed me to turn the tool into a web application which I’ve posted on my site. I hope you’ll take a look! Instructions are pretty basic:

  • Sorry, it only works with modern browsers (Internet Explorer 9, Firefox 7, Chrome, Safari, etc.) – this simplified my life, as I now don’t need to worry about Internet Explorer 6 and 7’s horrific standards support
  • Enter the visual field depression data (in decibels) from the visual field test into the appropriate boxes (the shaded entries correspond to the eye’s blind spot).
    • You can click on “Flip Orientation” to switch from left-eye to right-eye view if that is helpful in data entry.
    • You can also click on “Clear” to wipe out all the data entered and start from scratch. An error will be triggered if non-numeric data is entered or if not all of the values have been filled out.
    • Note: the software can accept depression values as negative or positive; the important thing is to stay consistent within each entry, as the software guesses the sign convention from all the numbers entered.
  • Click “Calculate” when you’re done to get the score

Hope this is helpful to the ophthalmology researchers out there!

(Image credit – example visual field) (Image credit – C Programming Language)

Two More Things

A few weeks ago, I did a little farewell tribute to tech visionary Steve Jobs after he stepped down as Apple’s CEO. While most observers probably recognized that the cause for his departure was his poor health, few probably guessed that he would die so shortly after he left. The tech press has done a great job of covering his impressive legacy and the numerous anecdotes/lessons he imparted on the broader industry, but there are a few things which stand out to me and deserve a little additional coverage:

  • Much has been said about Jobs’s 2005 Stanford graduation speech: it was moving the first time I read it (back in 2005), and I could probably dedicate a number of blog posts to it, but one of the biggest things I took from it which I haven’t seen covered as much lately was the resilience in the face of setbacks. Despite losing his spot at the company he built, Jobs pushed on to create NeXT and Pixar. And, while we all know Pixar today as the powerhouse behind movies such as Toy Story and Ratatouille, and most Apple followers recognize Apple’s acquisition of NeXT as the integral part of bringing Jobs back into the Apple fold, what very few observers realize is that, for a long time, NeXT and Pixar were, by most objective measures, failures. Despite Steve Jobs’s impressive vision and NeXT’s role in pioneering new technologies, NeXT struggled and only made its first profit almost 10 years after its founding – and only a measly $1 million despite taking many tens of millions of dollars from investors! If Wikipedia is to be believed, NeXT’s “sister” Pixar was doing so poorly that Jobs even considered selling Pixar to – gasp – Microsoft as late as 1994, just one year before Toy Story would turn things around. The point of all of this is not to knock Jobs, but to point out that Jobs was pretty familiar with setbacks. Where he stands out, however, is in his ability and willingness to push onward. He didn’t just wallow in self-pity after getting fired at Apple, or after NeXT/Pixar were forced to give up their hardware businesses – he found a way forward, making tough calls which helped guide both companies to success. And that resilience, I think, is something which I truly hope to emulate.
  • One thing which has stuck with me was a quote from Jobs on why he was opening up to his biographer, Walter Isaacson, after so famously guarding his own privacy: “I wanted my kids to know me … I wasn’t always there for them, and I wanted them to know why and to understand what I did.” It strikes me that at the close of his life, Jobs, one of the most successful corporate executives in history, was preoccupied not with his personal privacy, his fortune, his company’s market share, or even how the world views him, but with how his kids perceive him. If there’s one thing that Steve Jobs can teach us all, it’s that no amount of success in one’s career can replace success in one’s personal life.

(Image credit)

Solyndra and the Role of VCs and Government in Cleantech

Because of the subject matter here, I’ll re-emphasize the disclaimer that you can read on my About page: The views expressed in this blog are mine and mine alone and do not necessarily reflect the views of my current (or past) employers, their employees, partners, clients, and portfolio companies.

If you’ve been following either the cleantech world or politics, you’ll have heard about the recent collapse of Solyndra, the solar company the Obama administration touted as a shining example of cleantech innovation in America. Solyndra, like a few other “lucky” cleantech companies, received loan guarantees from the Department of Energy (like having Uncle Sam co-sign its loans), and is now embroiled in a political controversy over whether or not the administration acted improperly and whether or not the government should be involved in providing such support for cleantech companies.

Given my vantage point from the venture capital space evaluating cleantech companies, I thought I would weigh in with a few thoughts:

  • The failure of one solar company is hardly a reason to doubt cleantech as an enterprise. In every entrepreneurial industry where lots of bold, unproven ideas are being tested, you will see high failure rates. And, therein lies one of the beauties of a market economy – what the famous economist Joseph Schumpeter called “creative destruction.” That a large solar company like Solyndra failed is not a failing of the industry – if anything it’s a good thing. It means that one unproven idea/business model (Solyndra’s) was pushed out in favor of something better (in this case, more advanced crystalline silicon technologies and new thin film solar technologies) which means the employees/customers of Solyndra can now move on to more productive pastures (possibly another cleantech company which has a better shot at success).
  • The failure of Solyndra is hardly a reason to doubt the importance of government support for the cleantech industry. I believe that a strong “cleantech” industry is a good thing for the world and for the United States. It’s good for the world in that it represents new, more efficient methods of harnessing, moving, and using energy and is a non-political (and, so, less controversial to implement) approach to addressing the problems of man-made climate change. It’s good for the United States in that it represents a major new driver of market demand that the US is particularly well-suited to addressing, because of its leadership in technology & innovation, at a time when the US is struggling with job loss/economic decline/competition abroad. Or, to put it in a more economic way, what makes cleantech a worthy sector for government support is its strategic importance in the future growth of the global economy (potentially like a new semiconductor/software industry which drove much of the technology sector over the past two decades), the likelihood that the private sector will underinvest due to not correctly valuing the positive externalities (social good), and the fact that…
  • Private sector investors cannot do it all when it comes to supporting cleantech. One of the criticisms I’ve heard following the Solyndra debacle is that the government should have left the support of industries like cleantech to the private sector. While I’m sympathetic to that argument, my experience in the venture investing world is that many private investors are not well equipped to provide all the levels of support that the industry needs. Private investors, for instance, are very bad at (and tend to shy away from) providing basic sciences R&D support – that’s research which is not directly linked to the bottom line (and so is outside of what a private company is good at managing) and, in fact, should be conducted by academics who collaborate openly across the research community. Venture capital investors are also not that well-suited to supporting cleantech pilots/deployments – those checks are very large and difficult to finance. These are two large examples of areas where private investors are unlikely to be able to provide all the support that the industry will need to advance, and areas where there is a strong role for the government to play.
  • With all that said, I think there are far better ways for the government to go about supporting its domestic cleantech industry. Knowing a certain industry is strategic and difficult for the private sector to support completely is one thing – effectively supporting it is another. In this case, I have major qualms about how the Department of Energy is choosing to spend its time. The loan guarantee program not only puts taxpayer dollars at risk directly, it also picks winners and losers – something that industrial policy should try very hard not to do. Anytime you have the ability to pick winners and losers, you will create situations where the selection of winners and losers could be motivated by cronyism/favoritism. It also exposes the government to a very real criticism: shouldn’t a private sector investor like a venture capitalist do the picking? It’s one thing when these are small prize grants for projects – it’s another when large sums of taxpayer dollars are at risk. Better, in my humble opinion, to find other ways to support the industry, like:
    • Sponsoring basic R&D to help the industry with the research it needs to break past the next hurdles
    • Facilitating more dialogue between research and industry: the government is in a unique position to encourage more collaboration between researchers, between industry, between researchers AND industry, and across borders. Helping to set up more “meetings of the minds” is a great, relatively low-cost way of helping push an industry forward.
    • Issuing H1B visas for smart immigrants who want to stay and create/work for the next cleantech startup: I remain flabbergasted that there are countless intelligent individuals who want to do research/work/start companies in the US that we don’t let in.
    • Subsidizing cleantech project/manufacturing line finance: It may be possible for the government to use tax law or direct subsidies to help companies lower their effective interest payments on financing pilot line/project buildouts. Obviously, doing this would be difficult as we would want to avoid supporting the financing of companies which could fail, but it strikes me that this would be easier to “get right” than putting large swaths of taxpayer money at risk in loan guarantees.
    • Taxing carbon/water/pollution: if there’s one thing the government can do to drive research and demand for “green products”, it is to levy a tax which makes the costs of inefficiency obvious. Economists call this a Pigovian tax, and the idea is that there is no better way to get people to save energy/water and embrace cleaner energy than by making them pay for the alternative. (Note: for those of you worried about higher taxes, the tax can be balanced out by tax cuts/rebates so as to not raise the total tax burden on the US, only shift that burden towards things like pollution/excess energy consumption)

    This is not a complete list (nor is it intended to be one), but it’s definitely a set of options which are supportive of the cleantech industry, avoid the pitfall of picking winners and losers in a situation where the market should be doing that, and, except for the last, should not be super-controversial to implement.

Sadly, despite the abundance of interesting ideas and the steady pace of technology and business model innovation, Solyndra seems to have soured investors and the public on solar, and on cleantech more broadly. Hopefully, we get past this rough patch soon and find a way to more effectively channel the government’s energies and funds into bolstering the cleantech industry in its quest for clean energy and greater efficiency.

(Image credit)

3 Comments

The Bad Math of Comics Companies

A few months ago I posted on DC Comics’ publicized reboot of their entire comic book franchise and argued that this sort of bold action could be a good thing for the industry. Well, the reboot happened, and what’s the verdict? While there have been some very promising new books (I was particularly pleased with Grant Morrison’s Action Comics and Scott Snyder’s Batman), there were a few which, in my humble opinion, were changed for the worse.

But, while my comics fanboi rage might have been quelled had the editorial decisions been made in a way that pulls in new readers, such was not the case in at least one notable book, which butchered some of my favorite characters, as the following webcomic from Shortpacked illustrates:

2011-09-26-math

Seriously, DC. Even setting aside the fact that I’m a big Teen Titans fan (it was one of the first comic book series I actually read!) and that your reboot butchered a great female character – one who already had a great degree of sensuality – into some mindless, preening nymphomaniac: how did it ever occur to you to take a character who might have been a nice “gateway comic” for new fans and turn her into something unrecognizable and unlovable? Great one, DC. I hope the next reboot works better…

(There’s a great io9 post which further illustrates the stupidity of DC here)

(Image credit – Shortpacked)

Leave a Comment

The Marketing Glory of NVIDIA’s Codenames

This is an old tidbit, but nevertheless a good one that has (somehow) never made it to my blog. I’ve mentioned before the private equity consulting world’s penchant for silly project names, and while code names are not rare in the corporate world, more often than not they tend to be dull and of little use to the company. NVIDIA’s code names, however, are pure marketing glory.

Take NVIDIA’s high performance computing product roadmap (below) – these are products that take the graphics processing capabilities of NVIDIA’s high-end GPUs and turn them into smaller, cheaper, and more power-efficient supercomputing engines which scientists and researchers can use to crunch numbers (check out entries from the Bench Press blog for an idea of what researchers have been able to do with them). How does NVIDIA describe its future roadmap? With the names of famous scientists: Tesla (the great electrical engineer who helped bring us AC power), Fermi (“the father of the Atomic Bomb”), Kepler (one of the first astronomers to apply physics to astronomy), and Maxwell (the physicist who helped show that electrical, magnetic, and optical phenomena are all linked).

cudagpuroadmap

Who wouldn’t want to do some “high power” research (pun intended) with Maxwell? 🙂

But, what really takes the cake for me are the codenames NVIDIA uses for its smartphone/tablet chips: its Tegra line of products. Instead of scientists, it uses, well, comic book characters (now you know why I love them, right?) :-). For release at the end of this year? Kal-El – or, for the uninitiated, that’s the Kryptonian name for Superman. After that? Wayne, as in the alter ego of Batman. Then Logan, as in the name of the X-Men’s Wolverine. And then Stark, as in the alter ego of Iron Man.

Tegra_MWC_Update1

Everybody wants a little Iron Man in their tablet :-).

And, now I know what I’ll name my future secret projects!

(Image credit – CUDA GPU Roadmap) (Image credit – Tegra Roadmap)

One Comment

Android in Kenya

I mentioned before when discussing DCM’s Android Fund that Android is a truly global opportunity. While Nokia is probably praying that this is untrue, the recent success of Huawei in Kenya with its IDEOS phone illustrates that Android isn’t just doing well in the First World – its particular approach makes it well-suited to tackle the broader global market (HT: MIT Technology Review):

Smart phones surged in popularity in February after Safaricom, Kenya’s dominant telecom, began offering the cheapest smart phone yet on the market—an Android model called Ideos from the Chinese maker Huawei, which has been making inroads in the developing world. In Kenya, the price, approximately $80, was low enough to win more than 350,000 buyers to date.

That’s an impressive number for a region most in the developed world would probably write off as far too developing to be interesting. Now, Huawei’s IDEOS line is not going to blow anyone away – it’s small, has a fairly low-quality camera, and is pretty skimpy on RAM. But the fact that this device can hit the right price point to make the market real is a genuine advantage for the global Android ecosystem:

  • This is 350,000 additional potential Android users – not an earth-shattering number, but it’s always good to have more folks buying devices and using them for new apps/services
  • It’s enticing new developers into the Android community, both from within Kenya as well as from outside of Kenya. As the MIT Technology Review article further points out:

    Over the past year, Hersman has been developing iHub, an organization devoted to bringing together innovators and investors in Nairobi. Earlier this month, a mobile-app event arranged by iHub fielded 100 entrants and 25 finalists for a $25,000 prize for best mobile app. The winner, Medkenya, developed by two entrepreneurs, offers health advice and connects patients with doctors. Its developers have also formed a partnership with the Kenyan health ministry, with a goal of making health-care information affordable and accessible to Kenyans…

    Some other popular apps are in e-commerce, education, and agriculture. In the last group, one organization riding the smart-phone wave is Biovision, a Swiss nonprofit that educates farmers in East Africa about organic farming techniques. Biovision is developing an Android app for its 200 extension field workers in Kenya and other East African countries.

  • Given the carrier-subsidy model and the high price and bulkiness of computers, this means that there could be an entire generation of individuals whose main experience with the internet is from using Android devices, not from a traditional Windows/MacOS/Linux PC!

This ability to go ultra-low end and experiment with new partners/business models/approaches is an advantage of the fact that Android is a more open horizontal platform that can be adopted by more device manufacturers and partners. I wouldn’t be surprised to see further efforts by other Asian firms to expand into untapped markets like Africa, the Middle East, and Southeast Asia with other interesting go-to-market strategies like low-cost, pre-paid Android devices.

Leave a Comment

Avengers Assemble

My love of comics stems from something quite simple: good cartoons. I grew up watching cartoons based on classic comic book storylines. Shows like X-Men: The Animated Series, Spider-man: The Animated Series, and Batman: The Animated Series (which even won four Emmy Awards!) were just plain cool to a young boy who wanted to watch good guys beat up bad guys :-). It wasn’t until later that I discovered that they also had a depth and complexity to them that went beyond the usual cartoon. And it was that material which would help me catch up on years of comic book continuity when I finally made the shift to the comic medium.

So it’s with that context that I say I think the new cartoon The Avengers: Earth’s Mightiest Heroes (which is also available on Netflix!) is really good. And the approach is quite clever: they have found a way to take the core team of Avengers from the comics (Captain America, Iron Man, Thor, the Hulk, Giant Man, the Wasp, the Black Panther, and Hawkeye) and seamlessly weave together both classic (e.g. Kang’s attempted conquest of the earth, the original battle with Ultron, etc.) and modern (e.g. the big Marvel prison break which led to the founding of the New Avengers, Secret Invasion, etc.) storylines while keeping it kid-friendly! The result is something modern in its approach but fairly epic in its scope.

The_Avengers-Earth_s_Mightiest_Heroes-0

While the show has left out (and will probably continue to leave out) things less suited for children, like some of its predecessors it doesn’t shy away from the richness and complexity that these stories can provide. If you enjoy superheroes, or if you want a fun introduction to the Marvel universe that is on par with the quality of Batman: The Animated Series, or even if it’s just that you can’t wait for these guys:

avengers-new2

to take on this guy:

avengers-new3

then I’d highly recommend checking this series out.

(Image credit) (Image credit – Avengersite) (Image credit – Avengersite)

One Comment

Web vs native

When Steve Jobs first launched the iPhone in 2007, Apple expected the smartphone application market to move in the direction of web applications. The reasons for this were obvious: people already know how to build web pages and applications, and the web dramatically simplifies application delivery.

Yet in under a year, Apple changed course, shifting the focus of iPhone development from web applications to native applications custom-built (by definition) for the iPhone’s operating system and hardware. While I suspect part of the reason was to lock in developers, the main reason was certainly the inadequacy of available browser/web technology. While we can debate the former, the latter is just plain obvious. In 2007, the state of web development was primitive compared to today. There was no credible HTML5 support. Javascript performance was paltry. There was no real way for web applications to access local resources/hardware capabilities. Simply put, it was probably too difficult for Apple to kludge together an application development platform based solely on open web technologies which would deliver the sort of performance and functionality Apple wanted.

But, that was four years ago, and web technology has come a long way. Combine that with the tech commentator-sphere’s obsession with hyping up a rivalry between native and HTML5 app development, and it raises the question: will the future of application development be HTML5 applications or native?

There are a lot of “moving parts” in a question like this, but I believe the question itself is a red herring. Enhancements to browser performance and the new capabilities that HTML5 brings – offline storage, a canvas for direct graphic manipulation, and tools to access the file system – mean, at least to this tech blogger, that “HTML5 applications” are not distinct from native applications at all; they are simply native applications that you access through the internet. It’s not a different technology vector – it’s just a different form of delivery.
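To make that concrete, here’s a minimal sketch (written in TypeScript for the browser; the element IDs “sketchpad” and “filepicker” are hypothetical, just for illustration) of the three HTML5 capabilities mentioned above – the kinds of things that once required a native app:

```typescript
// Offline storage: persist a small piece of state locally via the
// Web Storage API -- it survives page reloads and works offline.
localStorage.setItem("lastVisit", new Date().toISOString());
const lastVisit: string | null = localStorage.getItem("lastVisit");
console.log("Last visit:", lastVisit);

// Canvas: direct graphic manipulation in the page, no plugin needed.
// "sketchpad" is a hypothetical <canvas> element id.
const canvas = document.getElementById("sketchpad") as HTMLCanvasElement;
const ctx = canvas.getContext("2d");
if (ctx) {
  ctx.fillStyle = "steelblue";
  ctx.fillRect(10, 10, 120, 80); // draw a filled rectangle directly to the page
}

// File access: read a user-selected local file with the File API.
// "filepicker" is a hypothetical <input type="file"> element id.
const picker = document.getElementById("filepicker") as HTMLInputElement;
picker.addEventListener("change", () => {
  const file = picker.files?.[0];
  if (!file) return;
  const reader = new FileReader();
  reader.onload = () => console.log(`Read ${file.name}:`, reader.result);
  reader.readAsText(file);
});
```

None of this is exotic – it’s a few lines of script in a plain web page – which is exactly the point: the “native” capabilities are arriving in the browser itself.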

Critics of this idea may point out that the performance and interface capabilities of browser-based applications lag far behind those of “traditional” native applications, and thus the two will always be distinct. And, as of today, they are correct. However, this discounts a few things:

  • Browser performance and browser-based application design are improving at a rapid rate, in no small part because of the combination of competition between different browsers and the fact that much of the code for these browsers is open source. There will probably always be a gap between browser-based apps and native, but I believe this gap will continue to narrow to the point where, for many applications, it simply won’t be a deal-breaker anymore.
  • History shows that cross-platform portability and ease of development can trump performance gaps. Once upon a time, all developers worth their salt coded in low-level machine language. But this was a nightmare – it was difficult to do simple things like showing text on a screen, and the code only worked on specific chips, operating systems, and hardware configurations. Languages like C helped to abstract a lot of that away, and, continuing the trend towards more portability and abstraction, the mobile/web developers of today work with tools (Python, Objective-C, Ruby, Java, Javascript, etc.) which make C look pretty low-level and hard to work with. Each level of abstraction adds a performance penalty, but that has hardly stopped developers from embracing them, and I feel the same will be true of “HTML5”.
  • Huge platform economic advantages. There are three big advantages today to HTML5 development over “traditional” native app development. The first is the ability to have essentially the same application run on any device which supports a browser. Granted, there are performance and user experience issues with this approach, but when you’re a startup, or even a corporate project with limited resources, being able to get wide distribution for early products is a huge advantage. The second is that HTML5 as a platform lacks the control/economic baggage that iOS and even Android carry, where distribution is controlled and “taxed” (30% to Apple/Google for an app download, 30% cut of digital goods purchases). I mean, what other reason does Amazon have to move its Kindle application off of the iOS native path and into HTML5 territory? The third is that web applications do not require the latest and greatest hardware to perform amazing feats. Because these apps are fundamentally browser-based, using the internet to connect to a server-based/cloud-based application allows even “dumb devices” to do amazing things by outsourcing some of the work to another system (a minimal sketch of this follows the list). The combination of these three makes it easier to build new applications and services and to make money off of them – which will ultimately lead to more and better applications and services for the “HTML5 ecosystem.”
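As a concrete illustration of that third advantage, here’s a minimal sketch (TypeScript again; the endpoint URL and payload shape are hypothetical) of a thin browser client shipping an expensive computation off to a server and simply rendering the answer:

```typescript
// A thin client: ship heavy number-crunching to a server over HTTP and
// just display the result. The endpoint and payload are hypothetical.
function crunchOnServer(numbers: number[], done: (result: string) => void): void {
  const xhr = new XMLHttpRequest();
  xhr.open("POST", "https://example.com/api/analyze"); // hypothetical endpoint
  xhr.setRequestHeader("Content-Type", "application/json");
  xhr.onload = () => done(xhr.responseText); // the server did the expensive part
  xhr.send(JSON.stringify({ data: numbers }));
}

// Even a low-powered, browser-only device can request a big computation:
crunchOnServer([1, 2, 3, 4, 5], (result) => console.log("Server says:", result));
```

The device’s job reduces to rendering HTML and making HTTP requests – everything computationally hard lives on the server, which is what lets “dumb devices” punch above their weight.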

Given Google’s strategic interest in the web as an open development platform, it’s no small wonder that they have pushed this concept the furthest. Not only are they working on a project called Native Client to let web applications achieve “native performance” in the browser, they’ve built an entire operating system centered on the browser, Chrome OS, and were the first to build a major web application store, the Chrome Web Store, to help with application discovery.

While it remains to be seen if any of these initiatives will end up successful, this is definitely a compelling view of how the technology ecosystem could evolve, and, putting my forward-thinking cap on, I would not be surprised if:

  1. The major operating systems became more ChromeOS-like over time. Mac OS’s dashboard widgets and Windows 7’s gadgets are already basically HTML5 mini-apps, and Microsoft has publicly stated that Windows 8 will support HTML5-based application development. I think this is a sign of things to come as the web platform evolves and matures.
  2. Continued focus on browser performance may lead to new devices/browsers focused on HTML5 applications. In the 1990s/2000s, there was a ton of attention focused on building Java accelerators in hardware/chips and software platforms whose main function was to run Java. While Java did not take over the world the way its supporters had thought, I wouldn’t be surprised to see a similar explosion just over the horizon focused on HTML5/Javascript performance – maybe even HTML5-optimized chips/accelerators, additional ChromeOS-like platforms, and potentially browsers optimized to run just HTML5 games or enterprise applications.
  3. Web application discovery will become far more important. The one big weakness of HTML5 as it stands today is application discovery. It’s still far easier to discover a native mobile app through the iTunes App Store or the Android Market than it is to find a good HTML5 app. But, as the platform matures and its economics shift, new application stores/recommendation engines/syndication platforms will become increasingly critical.

I can’t wait :-).

(Image credit – iPhone SDK)

22 Comments