
7 search results for "systems biology"

The Not So Secret (Half) Lives of Proteins

Ok, so I’m a little behind on my goal from last month to pump out my Paper a Month posts ahead of time, but better late than never!

This month’s paper comes once again from Science, as recommended to me by my old college roommate Eric. It was a pertinent recommendation for me, as the principal investigator on the paper is Uri Alon, a relatively famous scientist in the field of systems biology and author of the great introductory book An Introduction to Systems Biology: Design Principles of Biological Circuits, a book I happened to cite in my thesis :-).

In a previous paper-a-month post, I mentioned that scientists tend to assume proteins follow a basic first-order degradation relationship, where the higher the level of protein, the faster the proteins are cleared out. This gives a relationship not unlike the one you get with radioactive isotopes: they have half-lives, where after a certain amount of time, half of the previous quantity is gone. Within a cell, there are two ways for proteins to be cleared away in this fashion: either the cell grows/splits, so the same amount of protein has to be “shared” across more space (i.e. dilution), or the cell’s internal protein “recycling” machinery actively destroys the proteins (i.e. degradation).
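To make that concrete, here is a minimal sketch of the first-order picture: the total removal rate is just dilution plus degradation, and the half-life falls out as ln(2) divided by that total. The rate constants below are made-up numbers for illustration, not values from the paper.

```python
import numpy as np

# First-order removal: dP/dt = production - (dilution + degradation) * P.
# The protein's half-life is ln(2) / (total removal rate).
# All numbers below are illustrative, not values from the paper.

production = 100.0                   # new protein made per hour
alpha_dilution = np.log(2) / 20.0    # cells doubling every ~20 h
alpha_degradation = 0.10             # active degradation, per hour

alpha_total = alpha_dilution + alpha_degradation
half_life = np.log(2) / alpha_total       # hours
steady_state = production / alpha_total   # level where creation balances removal

print(f"half-life ~ {half_life:.1f} h, steady-state level ~ {steady_state:.0f}")
```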

This paper tried to study this by developing a fairly ingenious experimental method, as described in Figure 1B (below). The basic idea is to use well-understood genetic techniques to introduce the proteins of interest tagged with fluorescent markers (like YFP = Yellow Fluorescent Protein), which will glow if subjected to the right frequency of light. The researchers would then separate a sample of cells with the tagged proteins into two groups. One group would be the control (duh, what else do scientists do when they have two groups), and one would be subject to photobleaching, where fluorescent proteins lose their ability to glow over time if they are continuously excited. The result, hopefully, is one group of cells where the level of fluorescence is a balance between protein creation and destruction (the control) and one group of cells where the level of fluorescence stems from the creation of new fluorescently tagged proteins. Subtract the two, and you should get a decent indicator of the rate of protein destruction within a cell.

[Figure 1B from the paper]

But how do you figure out whether the removal is caused by dilution or by degradation? Simple: if you know the rate at which the cells divide (or can control the cells to divide at a certain rate), then you effectively know the rate of dilution. Subtract that from the total and you have the rate of degradation! The results for a broad swath of proteins are shown below in Figure 2, panels E & F, which show the ratio of the rate of dilution to the rate of degradation for a number of proteins and classify them by which is the bigger factor (those in brown are where degradation is much higher and hence they have a shorter half-life, those in blue are where dilution is much higher and hence they have a longer half-life, and those in gray are somewhere in between).

[Figure 2, panels E & F, from the paper]
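Taken literally, that subtraction is just arithmetic. Here is a hedged sketch with made-up numbers; the cutoffs used to label each protein are arbitrary and only meant to mirror the spirit of the paper’s brown/blue/gray grouping, not its actual thresholds.

```python
import numpy as np

# Separating dilution from degradation (illustrative numbers only).
# If cells double every T_div hours, dilution removes protein at ln(2)/T_div;
# subtracting that from the measured total removal rate leaves degradation.

T_div = 24.0          # cell doubling time, hours (made up)
alpha_total = 0.08    # measured total removal rate, per hour (made up)

alpha_dilution = np.log(2) / T_div
alpha_degradation = alpha_total - alpha_dilution

ratio = alpha_degradation / alpha_dilution
if ratio > 1.5:
    label = "degradation-dominated (shorter half-life)"
elif ratio < 1 / 1.5:
    label = "dilution-dominated (longer half-life)"
else:
    label = "somewhere in between"
print(f"degradation ~ {alpha_degradation:.3f}/h, dilution ~ {alpha_dilution:.3f}/h: {label}")
```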

While I was definitely very impressed with the creativity of the assay method and their ability to get data which matched up relatively closely (~10-20% error) with the “gold standard” method (which requires radioactivity and a complex set of antibodies), I was frankly disappointed by the main thrust of the paper. Cool assays don’t mean much if you’re not answering interesting questions. To me, the most interesting questions would be more functional: why do some proteins have longer or shorter half-lives? Why are some of their half-lives more dependent on one thing than the other? Do these half-lives change? If so, what causes them to change? What functionally determines whether a protein will be degraded easily versus not?

Instead, the study’s authors seemed to lose their creative spark shortly after creating their assay. They wound up trying to rationalize that which was already pretty self-evident to me:

  • If you stress a cell, you make it divide more slowly
  • Proteins which have slow degradation rates tend to have longer half-lives
  • Proteins which have slow degradation rates will have even longer half-lives when you get cells to stop dividing (because you eliminate the dilution)
  • Therefore, if you stress a cell, proteins which have the longest half-lives will have even longer half-lives

Now, this is a worthy finding – but given the high esteem of a journal like Science and the very cool assay they developed, it seemed a bit anti-climactic. Regardless, I hope this was just the first in a long line of papers using this particular assay to understand biological phenomena.

(Figures from paper)

Paper: Eden et al. “Proteome Half-Life Dynamics in Living Human Cells.” Science 331 (Feb 2011) – doi: 10.1126/science.1199784


Degradation Situation

Look at me, just a few months into this paper-of-the-month thing, and I’m over 2 weeks late on this one.

This month’s paper goes into something that is very near and dear to my heart – the application of math/models to biological systems (more commonly referred to as “Systems Biology”). The most interesting thing about Systems Biology to me is the contrast between the very physics-like approach it takes to biology – it immediately tries to approximate “pieces” of biology as very basic equations – and the relative lack of good quantitative data in biology to validate those models.

The paper I picked for this month bridges this gap and looks at a system with what I would consider to be probably the most basic systems biology equation/relationship possible: first-order degradation.

The biological system in question is very hot in biology: RNA-induced silencing. For those of you not in the know, RNA-induced silencing refers to the fact that short RNAs can act as more than just the “messenger” between the information encoded in DNA and the rest of the cell: they can also act as regulators of other “messenger” RNAs, triggering their destruction. This process not only won Craig Mello and Andrew Fire a Nobel Prize, but it also became a powerful tool for scientists to study living cells (by selectively shutting down certain RNA “messages” with short RNA molecules) and has even been touted as a potential future medicine.

But, one thing scientists have noticed about RNA-induced silencing is that how well it works depends on the RNA it is trying to silence. For some genes, RNA-induced silencing works super-effectively. For others, RNA-induced silencing does a miserable job. Why?

While there are a number of factors at play, the Systems Biologist/physicist would probably go to the chalkboard and start with a simple equation. After all, logic would suggest that the amount of a given RNA in a cell is related to a) how quickly the RNA is being destroyed and b) how quickly the RNA is being created. If you write out the equation and make a few simplifying assumptions (that the rate at which the particular RNA is created is relatively constant and that the rate at which it is destroyed is proportional to the amount of RNA that is there), then you get a first-order degradation equation which has a few easy-to-understand properties:

  • The higher the speed of RNA creation, the higher the amount of RNA you would expect when the cell was “at balance”
  • The faster the rate at which RNA is destroyed, the lower the “balance” amount of RNA
  • The amount of “at balance” RNA is actually the ratio of the speed of RNA creation to the speed of RNA destruction
  • There are many possible values of RNA creation/destruction rates which could result in a particular “at balance” RNA level

And of course, you can’t forget the kicker:

  • When the rate of RNA creation/destruction is higher, the “at balance” amount of RNA is more stable

Intuitively, this makes sense. If I keep pouring water into a leaky bathtub, the amount of water in the bathtub is likely to be more stable if the rate of water coming in and the rate of water leaking out are both extremely high, because then small changes in the leak rate or the water flow won’t have such a big impact. But, intuition and a super-simple equation don’t prove anything. We need data to bear this out.
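Here is a quick back-of-envelope simulation of that bathtub intuition, using arbitrary rates (nothing from the paper): two systems with the same “at balance” level but different turnover are knocked down to half their balance level, and the high-turnover one recovers much faster. One way to read “more stable” is exactly this faster return to balance after a disturbance.

```python
import numpy as np

# Toy model: dX/dt = k_make - k_destroy * X. Both systems below have the
# same steady state (k_make / k_destroy = 100) but different turnover.
# After knocking X down to 50, watch how long each takes to climb back
# to within 10% of balance. (Arbitrary illustrative rates.)

def simulate(k_make, k_destroy, x0, hours=30.0, dt=0.01):
    t = np.arange(0.0, hours, dt)
    x = np.empty_like(t)
    x[0] = x0
    for i in range(1, len(t)):
        x[i] = x[i - 1] + dt * (k_make - k_destroy * x[i - 1])
    return t, x

for k_destroy in (0.1, 1.0):          # slow vs fast turnover
    k_make = 100.0 * k_destroy        # same balance point of 100
    t, x = simulate(k_make, k_destroy, x0=50.0)
    recovery = t[np.argmax(x >= 90.0)]
    print(f"k_destroy = {k_destroy}: back to 90% of balance after ~{recovery:.1f} h")
```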

And, data we have. The next two charts come from Figure 3 and highlight the controlled experiment the researchers set up. To set it up, the researchers took luciferase, a firefly gene which glows in the dark, and tacked on 0, 3, 5, or 7 repeats of a short gene sequence which increases the speed at which the corresponding messenger RNA is destroyed. You can see below that the luciferase gene with 7 of these repeats produces only about 40% of the light of the “natural” luciferase, suggesting that the procedure worked – we have created artificial genes which work the same but whose messages degrade faster!

[Figure 3B from the paper]

So, we have our test messenger RNAs. Moment of truth: let’s take a look at what happens to the luciferase activity after we subject them to RNA-induced silencing. From Figure 3C:

[Figure 3C from the paper]

The chart above shows that the same RNA-induced silencing is much more effective at shutting down “natural” luciferase than luciferase which has been modified to be destroyed faster.
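One hedged way to rationalize that result with the same first-order equation (my own back-of-envelope sketch, not the authors’ model): if a message naturally decays at rate alpha and silencing adds an extra decay rate on top, the new balance level relative to the old one is alpha / (alpha + extra). A message that was already decaying quickly therefore ends up proportionally much less affected.

```python
# Back-of-envelope sketch (not the authors' model): silencing adds an extra
# decay rate k_si on top of a transcript's natural decay rate alpha, so the
# silenced steady state, relative to the unsilenced one, is alpha/(alpha + k_si).

k_si = 1.0                         # extra decay from silencing (arbitrary units)
for alpha in (0.1, 1.0, 10.0):     # slow, medium, fast natural decay
    residual = alpha / (alpha + k_si)
    print(f"natural decay {alpha:>4}: ~{residual:.0%} of the original mRNA remains")
```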

But what about genes other than luciferase? Does the effect hold for them too? The researchers applied microarray technology (which allows you to measure the amount of almost any specific RNA you may be interested in at different points in time) to study both the “natural” degradation rate of RNA and the impact of RNA-induced silencing. The chart on the left, from Figure 4C, shows a weak, albeit distinct, positive relationship between the rate of RNA destruction (the “specific decay rate”) and resistance to RNA-induced silencing (the “best-achieved mRNA ratio”).

[Figures 4C and 5A from the paper]

The chart on the right, from Figure 5A, shows the results of another set of experiments with HeLa cells (a common lab cell line). In this case, genes that had RNAs with a long half-life (a slow degradation rate) were the most likely to be extremely susceptible to RNA-induced silencing [green bar], whereas genes with short half-life RNAs (fast degradation rate) were the least likely to be extremely susceptible [red bar].

This was a very interesting study which really made me nostalgic and, I think, provided some interesting evidence for the simple first-order degradation model. However, the results were not as strong as one would have hoped. Take the chart from Figure 5A – although there is clearly a difference between the green, yellow, and red bars, the point has to be made using somewhat arbitrary/odd categorizations: instead of just showing me how decay rate corresponds to the relative impact on RNA levels from RNA-induced silencing, they concocted some bizarre measures of “long”, “medium”, and “short” half-lives and “fraction [of genes which, in response to RNA-induced silencing, become] strongly repressed”. It suggests to me that the data was actually very noisy and didn’t paint the clear picture that the researchers had hoped for.

That noise was probably a product of the fact that RNA levels are regulated by many different things, which is not the researchers’ fault. What the researchers could have done better, however, was quantify and/or rule out the impact of those other factors on the observed results using a combination of quantitative analysis and controlled experiments.

Those criticisms aside, I think the paper was a very cool experimental approach to verifying a systems biology-oriented hypothesis built around quite possibly the first equation that a modern systems biology class would cover!

(Images from Figure 3B, 3C, 4C, and 5A)

Paper: Larsson et al. “mRNA turnover limits siRNA and microRNA efficacy.” Molecular Systems Biology 6:433 (Nov 2010) – doi: 10.1038/msb.2010.89


The Ran gradient

Although my fascination with mathematical/systems biology did not kick in until relatively late in my education, I was encouraged by Professor Michael Brenner to pursue additional research beyond my limited coursework in the field. The project that we worked on involved using the technique of dominant balances to simplify the mathematical model built by the Macara group to probe the role of the Ran gradient in nuclear import. Although this research was never completed, it afforded me a rare opportunity to apply some simple Applied Mathematics reasoning to a very “traditional” biological problem and to arrive at some interesting conclusions.

Applying Dominant Balances to a Model of Nuclear Import

Abstract:

Mathematical models of complex biological systems are oftentimes exceedingly complex despite relatively simple system dynamics. This report investigates one complex model of protein nuclear import, a process which has been experimentally shown to depend on the large concentration gradient of Ran-GTP between the nucleus and cytoplasm. It shows that the method of dominant balances can be used to selectively study the system’s behavior, including determining a simple, yet accurate relationship which yields the numerical value of the Ran-GTP gradient.

[PDF]
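For anyone unfamiliar with the term, here is a toy illustration of the dominant-balance idea itself (completely unrelated to the actual nuclear-import model in the report): in an equation with several terms and a small parameter, you assume two of the terms dominate, solve the much simpler balance, and then check which exact solution each balance recovers.

```python
import numpy as np

# Toy dominant-balance example (not the nuclear-import model from the report):
# solve eps*x**2 + x - 1 = 0 for small eps.
#   Balance x with -1        ->  x ~ 1
#   Balance eps*x**2 with x  ->  x ~ -1/eps
# Each balance recovers one of the two exact roots.

eps = 1e-3
exact = np.sort(np.roots([eps, 1.0, -1.0]))       # exact roots of the quadratic
estimates = np.sort(np.array([1.0, -1.0 / eps]))  # the two balance estimates
print("exact roots:      ", exact)
print("balance estimates:", estimates)
```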


All Roads Lead to . . .

If you had told me four years ago that I would be working in consulting, I would have responded with a basic question: “What’s consulting? And, why am I doing it?”

As recently as a year ago, I was positive that I would be pursuing a PhD in Systems Biology (or something similar such as Computational Biology or Mathematical Biology). The field was deeply exciting to me. It was (and still is) full of untapped potential. I spoke eagerly with professors Erin O’Shea and Michael Brenner about how I could prepare myself and what I could study. Having worked in the lab of professor Tom Maniatis for almost two years at that point, and having been exposed to the joys of doing collaborative scientific work, I was fairly certain that being a graduate student doing research full-time was what I wanted.

With almost a sense of smugness, I looked down at the more “business-y types”. I thought what they were doing lacked rigor, and was hence not worthy of my time. I believed it was mere mental child’s play compared to the intellectual excitement of trying to decode complex gene networks and how invisible molecules could determine whether we were healthy or sick.

So what happened? Well, I can think of four main reasons. The first and most immediate was that I was part of the organizing committee behind the 2006 Harvard College Asian Business Forum, which was the HPAIR (Harvard Project for Asian and International Relations) business conference. The experience was very rewarding and eye-opening, but more than that, it was an impetus to follow the paths of the many excited delegates, many of whom were early professionals looking into business jobs like consulting and finance.

The second factor was a growing awareness of what life in academia meant. Yes, I was well aware of the struggles that junior academics had to go through on their way towards becoming tenured faculty. But at the same time, towards the end of the summer, with several experiments facing setbacks and doubts in my mind over my ability to be a good researcher, I began looking at other alternatives.

The third consideration stems from the fact that I have always been interested in application. My approach towards science has always been rooted in searching for possible applications, whether commercial or for the public interest. Even the reason that I chose to specialize in Systems Biology stemmed from a belief that traditional molecular and cellular techniques would face sharply diminishing returns with regards to finding the causes and cures for diseases. Having lived almost all of my pre-college life in the Silicon Valley, I was geared towards seeing fruitful science as science that moved from “bench to bedside”, and my highest aim was to turn brilliant ideas into profitable ones.

The final factor is of course that it’s always exciting to try something new, especially something competitive — and even though I cursed recruiting at times, it could feel like a fun competition. Although I did not expect to receive a job offer from any firm, I did better than I expected in the interview process and received an offer which I simply found too interesting to turn down.

All roads, at least for me, led to consulting.


CV

Professional Experience

1955 Capital (2016 – ; Bay Area): Principal

1955 Capital is a U.S.-based venture capital firm that invests in technologies from the developed world that can help solve the developing world’s greatest challenges in areas like energy, the environment, food safety, health, education, and manufacturing. 

Yik Yak (2014 – 2015; Atlanta): VP, Product & Business Development

Yik Yak was a venture-backed startup building hyperlocal communities for mobile devices which has raised over $70M in venture funding since its inception.

  • Drove Series B fundraising process leading to a $60M+ financing
  • Built and managed team of 6 PMs, designers, and UX researchers supporting all product and systems development within company
  • Implemented analytics systems and helped build culture around data and experimentation
  • Guided roadmap and implementation of over 1 year of releases including multiple key features (phone number verification, reply icons, notification center, My Herd, sharing, web)
  • Handled all preliminary partnership conversations and executed on partnerships with University of Florida, MTV, Comedy Central, and Bleacher Report

DCM (2010 – 2014;  Bay Area): VP, Investments

DCM is a leading early stage venture capital fund based in the Silicon Valley, Beijing, and Tokyo, managing over $3B in assets.

  • Generalist within firm with emphasis on opportunities in frontier technologies, infrastructure tech, connected platforms, and new models for healthcare
  • Sourced and closed three investments (all of which raised follow-on financing): Augmedix, Athos, and Yik Yak
  • Worked directly with management at a number of DCM companies (including 1Mainstream, Arrayent, Athos, Augmedix, Cognitive Networks, Enovix, FreedomPop, Rayvio, Stride Health, and Yik Yak) on product marketing, fundraising, financial planning, business development, hiring, and defining strategy
  • Built software tools to facilitate internal voting process and programmatically parse data from AppAnnie (mobile app store rankings data vendor) and Crunchbase (online repository of venture financings)

Bain & Company (2007 – 2010;  Bay Area): Senior Associate Consultant

Bain & Company is a leading management consulting firm.

  • Performed competitive benchmarking to compare a client’s manufacturing operations with best-in-class contract manufacturers
  • Analyzed technology industry profitability to identify attractive growth vectors for large tech client and highlight key differences between profit/power concentration in different verticals
  • Conducted financial and strategic diligence on wide range of potential acquisition targets (ranging in value from ~$100M to over $50B) to map out strategic acquisition options and gameboarding scenarios for a Fortune 500 technology client
  • Facilitated concept development and pilot phase of client initiative to provide operational support services for supply chain
  • Provided strategic and financial analysis to aid multiple clients in determining appropriate response to potentially disruptive trends; topics covered include cloud computing, mobile commerce, next-generation semiconductor manufacturing, cross-border eCommerce, social networking, etc.
  • Devised presentation for CEO-level conversation on a process to pro-actively acquire/partner with assets which can aid client in dealing with disruptive innovations
  • Overhauled Bain toolkit for codification in new book by Bain partners Mark Gottfredson and Steve Schaubert, The Breakthrough Imperative
  • Consistently received strong reviews with ratings of “frequently exceeds expectations” and “upward trajectory”; received offer to return along with offer of sponsorship for future graduate studies

Roche Pharmaceuticals (2005; Palo Alto, CA): Research Intern, Drug Metabolism & Pharmacokinetics

Roche Pharmaceuticals is a leading pharmaceutical company.

  • Validated use of Isothermal Titration Calorimeter in enzyme kinetics studies
  • Assessed factors limiting application of approach to study of Cytochrome P450 enzyme system
  • Presented findings at department seminar

Abgenix Corporation (2003; Fremont, CA): Intern, Process Sciences

Abgenix was a biotechnology company focused on humanized antibody therapeutics which was acquired by Amgen in 2006.

  • Performed optimization studies for ELISA protocols used by Abgenix’s Process Sciences division
  • Presented findings at department seminar

Projects

Xhibitr (2007 – 2009; Bay Area)

Xhibitr was a fashion-oriented social network (see more detail)

  • Provided organizational structure for team and developed long-term plan
  • Primary responsibility for HTML/CSS and aspects of front-end interface development
  • Presented initial design with co-founder team at Stanford’s 2009 Cool Products Expo

Academic Research

Maniatis Group (2004 – 2007; Cambridge, MA): Harvard University Department of Molecular and Cellular Biology

Completed senior thesis “Transcriptional Regulation of Members of the Tripartite Motif Family” (link, report PDF) in lab of Professor Tom Maniatis.

Brenner Group (2006 – 2007; Cambridge, MA): Harvard University Division of Engineering and Applied Sciences

Worked with Professor Michael P. Brenner on applying separation of timescales and dominant balances towards simplifying complex biological math models, specifically with regards to what sets the Ran gradient in nuclear transport (link, report PDF).

Brown Group (Summer 2004; Stanford, CA): Stanford Medical School Center for Clinical Science Research

Worked in the lab of Professor Janice (Wes) M. Y. Brown researching stem cell reconstitution of murine immune systems in conjunction with antifungal agents and combinations of lymphoid and myeloid progenitor cells. Research summarized in: (Arber C, et al. Journal of Infectious Diseases [2005])

Education

Harvard University (2003 – 2007; Cambridge, MA): A. B. Magna Cum Laude with Highest Honors in Biochemical Sciences, Secondary Field in Mathematical Sciences

  • Honors: Phi Beta Kappa, Dean’s Research Award (2006), Harvard College Research Program Award (2006), Harvard College Scholar (2005-2006), Tylenol Scholar (2005-2006)
  • Thesis: Transcriptional Regulation of Members of the Tripartite Motif Family in Response to Viral Infection (link)
  • Selected Activities

Skills/Others


HotChips 101

This post is almost a week overdue thanks to a hectic work week. In any event, I spent last Monday and Tuesday immersed in the high performance chip world at the 2009 HotChips conference.

Now, full disclosure: I am not an electrical engineer, nor was I even formally trained in computer science. At best, I can “understand” a technical presentation in a manner akin to how my high school biology teacher explained his “understanding” of the Chinese language: “I know enough to get in trouble.”
But despite all of that, I was given a rare look at a world that few non-engineers ever get to see, and yet it is one which has a dramatic impact on the technology sector given the importance of these cutting-edge chip technologies in computers, mobile phones, and consumer electronics.

And, here’s my business strategy/non-expert enthusiast view of six of the big highlights I took away from the conference which best inform technology strategy:

  1. We are 5-10 years behind on the software development technology needed to truly get performance power out of our new chips. Over the last decade, computer chip companies discovered that simply ramping up clock speeds (the Megahertz/Gigahertz number that everyone talks about when describing how fast a chip is) was not going to cut it as a way of improving computer performance (because of power consumption and heat issues). As a result, instead of making the cores (the processing engines) on a chip faster, chip companies like Intel resorted to adding more cores to each chip. The problem with this approach is that performance becomes highly dependent on software developers being able to create software which can figure out how to separate tasks across multiple cores and share resources effectively between them – something which is “one of the hardest if not the hardest systems challenge that we as an industry have ever faced” (courtesy of UC Berkeley professor Dave Patterson). The result? Chip designers like Intel may innovate to the moon, but unless software techniques catch up, we won’t get to see any of that. Is it any wonder, then, that Intel bought multi-core software technology company RapidMind or that other chip designers like IBM and Sun are so heavily committed to creating software products to help developers make use of their chips? (Note: the image to the right is an Apple ad of an Intel bunny suit smoked by the PowerPC chip technology that they used to use)
  2. Computer performance may become more dependent on chip accelerator technologies. The traditional performance “engine” of a computer was the CPU, a product which has made the likes of Intel and IBM fabulously wealthy. But, the CPU is a general-purpose “engine” – a jack of all trades, but a master of none. In response to this, companies like NVIDIA, led by HotChips keynote speaker Jen-Hsun Huang, have begun pushing graphics chips (GPUs), traditionally used for gaming or editing movies, as specialized engines for computing power. I’ve discussed this a number of times over at the Bench Press blog, but the basic idea is that instead of using the jack-of-all-trades-and-master-of-none CPU, a system should use specialized chips to address specialized needs. Because a lot of computing power is burnt doing work that is heavy on the mathematical tasks that a GPU is suited to do, or the signal processing work that a digital signal processor might be better at, or the cryptography work that a cryptography accelerator is better suited for, this opens the doorway to the use of other chip technologies in our computers. NVIDIA’s GPU solution is one of the most mature, as they’ve spent a number of years developing a solution they call CUDA, but there was definitely a clear message: as the performance that we care about becomes more and more specialized (like graphics or number crunching or security), special chip accelerators will become more and more important.
  3. Designing high-speed chips is now less and less about “chip speed” and more and more about memory and input/output. An interesting blog post by Gustavo Duarte highlighted something very fascinating to me: your CPU spends most of its time waiting for things to do. So much time, in fact, that the best way to speed up your chip is not to speed up your processing engine, but to speed up getting tasks into your chip’s processing cores. The biological analogy to this is something called a perfect enzyme – an enzyme that works so fast that its speed is limited by how quickly it can get ahold of things to work on. As a result, every chip presentation spent ~2/3 of the time talking about managing memory (where the chip stores the instructions it will work on) and managing how quickly instructions from the outside (like from your keyboard) get to the chip’s processing cores. In fact, one of the IBM POWER7 presentations spent almost the entire time discussing the POWER7’s use and management of embedded DRAM technology to speed up how quickly tasks can get to the processing cores.
  4. Moore’s Law may no longer be as generous as it used to be. I mentioned before that one of the big “facts of life” in the technology space is the ability of the next product to be cheaper, faster, and better than the last – something I attributed to Moore’s Law (an observation that chip technology doubles in capability every ~2 years). At HotChips, there was a fascinating panel discussing the future of Moore’s Law, mainly asking the questions of (a) will Moore’s Law continue to deliver benefits and (b) what happens if it stops? The answers were not very uplifting. While there was a wide range of opinions on how much we’d be able to squeeze out of Moore’s Law going forward, there was broad consensus that the days of just letting Moore’s Law lower your costs, reduce your energy bill, and increase your performance simultaneously were over. The amount of money it costs to design next-generation chips has grown exponentially (one panelist cited a cost of $60 million just to start a new custom project), and the amount of money it costs to operate a semiconductor factory has skyrocketed into the billions. And, as one panelist put it, constantly riding the Moore’s Law technology wave has forced the industry to rely on “tricks” which reduced the delivery of all the benefits that Moore’s Law was typically able to bring about. The panelists warned that future chip innovations were going to be driven more and more by design and software rather than blindly following Moore’s Law and that unless new ways to develop chips emerged, the chip industry could find its progress slowing.
  5. Power management is top of mind. The second keynote speaker, EA Chief Creative Officer Richard Hilleman, noted something which gave me significant pause. He said that in 2009, China would probably produce more electric cars in one year than have ever been produced in all of history. The impact to the electronics industry? It will soon be very hard to find and very expensive to buy batteries. This, coupled with the desires of consumers everywhere to have longer battery lives for their computers, phones, and devices, means that managing power consumption is critical for chip designers. In each presentation I watched, I saw the designers roll out a number of power management techniques – the most amusing of which was employed by IBM’s new POWER7 uber-chip. The POWER7 could implement four different low-power modes (so that the system could tune its power consumption), which were humorously named: doze, nap, sleep, and “Rip van Winkle”.
  6. Chip designers can no longer just build “the latest and greatest”. There used to be one playbook in the Silicon Valley – build what you did a year ago, but make it faster. That playbook is fast becoming irrelevant. No longer can Silicon Valley just count on people to buy bigger and faster computers to run the latest and greatest applications. Instead, people are choosing to buy cheaper computers to run Facebook and Gmail, which, while interesting and useful, no longer need the CPU or monitor with the greatest “digital horsepower.” EA’s Richard Hilleman noted that this trend was especially important in the gaming industry. Where before, the gaming industry focused on hardcore gamers who spent hours and hours building their systems and playing immersive games, today, the industry is keen on building games with clever mechanics (e.g. a Guitar Hero or a game for the Nintendo Wii) for people with short attention spans who aren’t willing to spend hours holed up in front of their televisions. Instead of focusing on pure graphical horsepower, gaming companies today want to build games which can be social experiences (like World of Warcraft) or which can be played across many devices (like smartphones or over social networks). With stores like Gamestop on the rise, gaming companies can no longer count on just selling games; they need to think about how to sell “virtual goods” (like upgrades to your character/weapons) or in-game advertising (a Coke billboard in your game?) or encourage users to subscribe. What this all means is that, to stay relevant, technology companies can no longer just gamble on their ability to make yesterday’s product faster, they have to make it better too.

There was a lot more that happened at HotChips than I can describe here (and I skipped over a lot of the more techy details), but those were six of the most interesting messages that I left the conference with, and I am wondering if I can get my firm to pay for another trip next year!

Oh, and just to brag, while at HotChips, I got to check out a demo of the potential blockbuster game Batman: Arkham Asylum while checking out NVIDIA’s 3D Vision product! And I have to say, I’m very impressed by both products – and am now very tempted by NVIDIA’s Buy a GeForce card, get Batman: Arkham Asylum free offer.

(Image credit: Intel bunny smoked ad) (Image credit: GPU computing power) (Image Credit: brick wall) (Image – Rip Van Winkle) (Image – World of Warcraft box art)


A case fit for House MD

I’m a huge fan of the show House MD. In particular, I love the show’s use of incredibly bizarre, but true, medical cases. For example, in one of the earlier shows, House and his team make a diagnosis based on the fact that sleeping sickness can be transmitted by sexual contact. That may sound like nothing extraordinary, until it is pointed out that this medical “fact” rests on a single reported case in a foreign medical journal. Probable? No. But, fake? Not really.

Unfortunately, consulting does not leave much time in my day for keeping up with scientific papers. I end up accumulating a pile of papers to read which just never seems to shrink. However, I was recently shaken out of my paper-reading stupor when A. Phan pointed me to one particularly interesting study published in the most recent issue of the New England Journal of Medicine. The AFP article which summarizes the study is simply jaw-dropping:

Girl switches blood type after liver transplant

The medical study details the struggles of a 9-year-old Australian girl who needed a liver transplant due to a case of “non-A-to-G hepatitis” (translation: doctors know that something serious is hitting the liver, but they have no clear idea what it is). She is given a liver transplant from a 12-year-old boy who died of hypoxia (lack of oxygen to the brain) and was positive for a normally innocuous virus called CMV (cytomegalovirus). The match is nowhere near perfect, so the girl is treated with immunosuppressants to prevent rejection.

Unfortunately, while CMV is normally harmless, it can cause problems in patients with weakened immune systems. Sure enough, the girl had to be re-admitted to the hospital 2 weeks after being discharged. Her doctors noted that the severe lymphopenia (a shortage of the blood cells needed to fight infection) that was ailing the girl prior to the transplant had persisted even 5 weeks after the transplant. The doctors had simply thought this was a combination of infection and the immunosuppressants they were giving her, so they adjusted the medication they gave her.

7-8 months after that (9 months post-liver transplant), the girl was re-admitted to the hospital for surgery due to a bowel obstruction, and it was then that they noticed that the patient’s blood, which had previously been type O-negative, had tested O-positive! This was especially incredible given that both parents were homozygous O-negative, meaning that there was no way, genetically, that the girl could produce O-positive blood. Typically, the only way a blood type switch can happen is through a bone marrow transplant, which replaces the blood-making cells of our bodies with the blood-making cells from a donor — and even then, it’s accompanied by something which the girl did not suffer from: GVHD (Graft-Versus-Host Disease), where the new donor immune system thinks that the recipient’s entire body is foreign, and should thus be attacked.

A month after that (10 months post-liver transplant), after a mild respiratory tract infection (a cold or cough), the girl started showing signs of hemolytic anemia. Literally, her blood cells were bursting — something you would expect in blood type mismatch problems. Heavy immunosuppressive therapy and constant transfusions seemed only to alleviate the problem slightly. A careful examination of her blood showed that her immune cells were more than 90% from the donor, something which was verified not only by blood type, but also by the fact that these cells had Y chromosomes (results from fluorescence in-situ hybridization to the right; red dot is a Y chromosome; green dot is an X chromosome; the cell at the top is thus XX — female — and the cell at the bottom right is XY — male).

In words that President George W. Bush might understand, the donor’s new blood cells are US forces in Iraq. The remaining blood cells from the girl are scared Iraqis who see strangers everywhere and are prone to using guns. The hemolytic anemia is the result of the ensuing fighting. And the immunosuppressants are some magical way (maybe supplying both sides with alcohol?) to reduce the ability of both sides to fight.

The doctors had a choice. Do they:

  1. Give her a drug to wipe out a big chunk of the immune cells from both donor and recipient (nuke Iraq to kill enough people, on both sides, to stop the war)
  2. Stop all immunosuppressants and just let the immune cells duke it out (take off all the handcuffs on US forces and let them wipe out the remaining Iraqi insurgents and hope that Iraq is still in one piece when it’s all over)

They went with the second strategy.

It is now about 5 years after the transplant. The girl is healthy, and no longer on immunosuppressants. Her blood is now completely from the donor, despite the lack of bone marrow transplant. There has been no sign of the GVHD which typically accompanies the sorts of bone marrow transplants which could lead to blood type switching, and it would appear that the girl’s new immune system has been “re-trained” to not recognize the liver or the girl’s body as foreign.

So, the big question in my mind is how? How could a non-bone marrow transplant lead to blood type switching? The only two things I can think of are:

  1. A virus caused liver cells from the transplant to fuse with the girl’s blood-making bone marrow cells. Why it may be possible:
    1. In biology labs, forcing cells to fuse is oftentimes done with viruses
    2. It is known that stem cells like the blood-making bone marrow cells are prone to fusing (a result which confused many early researchers who were positive they found examples of blood stem cells turning into non-blood cells)
  2. Because the boy was so young, it is possible that the transplanted liver still contained blood-making stem cells which were re-activated. Why it may be possible:
    1. The fetal blood supply is produced, at least in part, by cells in the liver
    2. Stem cells which are dormant (e.g. the cells in your skin that can produce new skin) can be activated with the appropriate stimuli (e.g. burn)

This is all just speculation on my part, and I doubt we will ever find the answer in the case of this patient (who is no doubt sick of doctors and hospitals), but things like this are reasons why I love House and why I love science.


