

The Not So Secret (Half) Lives of Proteins

Ok, so I’m a little behind on my goal from last month to pump out my Paper a Month posts ahead of time, but better late than never!

This month’s paper comes once again from Science, as recommended to me by my old college roommate Eric. It was a pertinent recommendation for me, as the principal investigator on the paper is Uri Alon, a relatively famous scientist in the field of systems biology and author of the great introductory book An Introduction to Systems Biology: Design Principles of Biological Circuits, a book I happened to cite in my thesis :-).

In a previous paper-a-month post, I mentioned that scientists tend to assume proteins follow a basic first-order degradation relationship, where the higher the level of a protein, the faster that protein is cleared out. This gives a relationship not unlike the one you get with radioactive isotopes: they have half-lives, where after a certain amount of time, half of the previous quantity is gone. Within a cell, there are two ways for proteins to be cleared away in this fashion: either the cell grows/splits, so the same amount of protein has to be “shared” across more space (i.e., dilution), or the cell’s internal protein “recycling” machinery actively destroys the proteins (i.e., degradation).
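
For the mathematically inclined, here is a minimal sketch of that relationship in my own notation (not the paper’s): the protein P is produced at some rate β and removed in proportion to how much of it is around, with the removal rate split between dilution and degradation; the half-life then falls out of the total removal rate:

  \[
  \frac{dP}{dt} = \beta - (\alpha_{dil} + \alpha_{deg})\,P
  \qquad\Longrightarrow\qquad
  t_{1/2} = \frac{\ln 2}{\alpha_{dil} + \alpha_{deg}}
  \]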

This paper tried to study this by developing a fairly ingenious experimental method, as described in Figure 1B (below). The basic idea is to use well-understood genetic techniques to introduce the proteins of interest tagged with fluorescent markers (like YFP = Yellow Fluorescent Protein), which will glow if exposed to the right frequency of light. The researchers would then separate a sample of cells with the tagged proteins into two groups. One group would be the control (duh, what else do scientists do when they have two groups), and one would be subject to photobleaching, where fluorescent proteins lose their ability to glow over time if they are continuously excited. The result, hopefully, is one group of cells where the level of fluorescence reflects the balance between protein creation and destruction (the control), and one group of cells where the level of fluorescence stems only from the creation of new fluorescently tagged proteins. Subtract the two, and you should get a decent indicator of the rate of protein removal within a cell.

[Figure 1B from the paper]
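
To make the subtraction logic concrete, here is a toy simulation in Python (my own made-up numbers and code, not the paper’s data or analysis): because both groups keep making new fluorescent protein at the same rate, the difference between the control and bleached signals should decay exponentially at the total removal rate, which a simple log-linear fit can then recover.

  import numpy as np

  # Toy illustration of the subtraction logic (made-up numbers, not the paper's data).
  # Both populations keep producing fluorescent protein at rate beta;
  # the total removal rate (dilution + degradation) is alpha.
  beta, alpha = 100.0, 0.5                 # arbitrary units/hour and 1/hour
  t = np.linspace(0, 8, 50)                # hours after photobleaching

  steady_state = beta / alpha              # the control just sits at its balance point
  control = np.full_like(t, steady_state)
  bleached = steady_state * (1 - np.exp(-alpha * t))   # recovers as new protein is made

  # The difference between the two decays exponentially at rate alpha,
  # so a straight-line fit to its logarithm recovers the removal rate.
  difference = control - bleached
  alpha_estimate = -np.polyfit(t, np.log(difference), 1)[0]
  print(f"recovered removal rate: {alpha_estimate:.2f} per hour (true value: {alpha})")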

But how do you figure out whether the protein removal is caused by dilution or by degradation? Simple: if you know the rate at which the cells divide (or can control the cells to divide at a certain rate), then you effectively know the rate of dilution. Subtract that from the total and you have the rate of degradation! The results for a broad swath of proteins are shown below in Figure 2, panels E & F, which show the ratio of the rate of dilution to the rate of degradation for a number of proteins and classify them by which is the bigger factor (those in brown are where degradation is much higher and hence they have a shorter half-life, those in blue are where dilution is much higher and hence they have a longer half-life, and those in gray are somewhere in between).

[Figure 2E & 2F from the paper]
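
Put in equation terms (again my own notation, not the paper’s): if the cells double every T_div hours, the dilution rate is pinned down by the division rate, and the degradation rate is simply whatever removal is left over:

  \[
  \alpha_{dil} = \frac{\ln 2}{T_{div}},
  \qquad
  \alpha_{deg} = \alpha_{total} - \alpha_{dil}
  \]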

While I was definitely very impressed with the creativity of the assay method and their ability to get data which matched up relatively closely (~10-20% error) with the “gold standard” method (which requires radioactivity and a complex set of antibodies), I was frankly disappointed by the main thrust of the paper. Cool assays don’t mean much if you’re not answering interesting questions. To me, the most interesting questions would be more functional: why do some proteins have longer or shorter half-lives? Why are some of their half-lives more dependent on one thing than the other? Do these half-lives change? If so, what causes them to change? What functionally determines whether a protein will be degraded easily versus not?

Instead, the study’s authors seemed to lose their creative spark shortly after creating their assay. They wound up rationalizing conclusions that were already pretty self-evident to me:

  • If you stress a cell, you make it divide more slowly
  • Proteins which have slow degradation rates tend to have longer half-lives
  • Proteins which have slow degradation rates will have even longer half-lives when you get cells to stop dividing (because you eliminate the dilution)
  • Therefore, if you stress a cell, proteins which have the longest half-lives will have even longer half-lives

Now, this is a worthy finding – but given the high esteem of a journal like Science and the very cool assay they developed, it seemed a bit anti-climactic. Regardless, I hope this was just the first in a long line of papers using this particular assay to understand biological phenomena.

(Figures from paper)

Paper: Eden et al. “Proteome Half-Life Dynamics in Living Human Cells.” Science 331 (Feb 2011) – doi: 10.1126/science.1199784


Degradation Situation

Look at me, just the third month of this paper-a-month thing, and I’m already over two weeks late on this.

This month’s paper goes into something that is very near and dear to my heart – the application of math/models to biological systems (more commonly referred to as “Systems Biology”). The most interesting thing about Systems Biology to me is the contrast between the very physics-like approach it takes to biology – it immediately tries to approximate “pieces” of biology as very basic equations – and the relative lack of good quantitative data in biology to validate those models.

The paper I picked for this month bridges this gap and looks at a system with what I would consider to be probably the most basic systems biology equation/relationship possible: first-order degradation.

The biological system in question is very hot in biology: RNA-induced silencing. For those of you not in the know, RNA-induced silencing refers to the fact that short RNAs can act as more than just the “messenger” between the information encoded in DNA and the rest of the cell – they can also act as regulators of other “messenger” RNAs, resulting in their destruction. This process not only earned Craig Mello and Andrew Fire a Nobel Prize, but it became a powerful tool for scientists to study living cells (by selectively shutting down certain RNA “messages” with short RNA molecules) and has even been touted as a potential future medicine.

But, one thing scientists have noticed about RNA-induced silencing is that how well it works depends on the RNA it is trying to silence. For some genes, RNA-induced silencing works super-effectively. For others, it does a miserable job. Why?

While there are a number of factors at play, the Systems Biologist/physicist would probably go to the chalkboard and start with a simple equation. After all, logic would suggest that the amount of a given RNA in a cell is related to a) how quickly the RNA is being destroyed and b) how quickly the RNA is being created. If you write out the equation and make a few simplifying assumptions (that the rate at which the particular RNA is created is relatively constant and that the rate at which it is destroyed is proportional to the amount of RNA that is there), then you get a first-order degradation equation (sketched out in math after the list below) which has a few easy-to-understand properties:

  • The higher the speed of RNA creation, the higher the amount of RNA you would expect when the cell was “at balance”
  • The faster the rate at which RNA is destroyed, the lower the “balance” amount of RNA
  • The amount of “at balance” RNA is actually the ratio of the speed of RNA creation to the speed of RNA destruction
  • There are many possible values of RNA creation/destruction rates which could result in a particular “at balance” RNA level

And of course, you can’t forget the kicker:

  • When the rate of RNA creation/destruction is higher, the “at balance” amount of RNA is more stable
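
Working the arithmetic through myself (my own notation and back-of-the-envelope version, not something lifted from the paper): if the RNA R is made at a constant rate k_syn and destroyed at a rate k_deg times the amount present, the “at balance” level is the ratio of the two; and if RNA-induced silencing adds an extra destruction term k_si, the best achievable knockdown depends on how fast the RNA was already turning over:

  \[
  \frac{dR}{dt} = k_{syn} - k_{deg}\,R
  \quad\Longrightarrow\quad
  R_{balance} = \frac{k_{syn}}{k_{deg}},
  \qquad
  \frac{R_{silenced}}{R_{balance}} = \frac{k_{deg}}{k_{deg} + k_{si}}
  \]

The faster an RNA already turns over (the bigger k_deg is), the closer that last ratio stays to 1 – which is exactly the stability point in the last bullet above.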

Intuitively, this makes sense. If I keep pouring water into a leaky bathtub, the amount of water in the bathtub is likely to be more stable if the rate of water coming in and the rate of water leaking out are both extremely high, because then small changes in the leak rate or the water flow won’t have such a big impact. But intuition and a super-simple equation don’t prove anything. We need data to bear this out.

And, data we have. The next two charts come from Figure 3 and highlight the controlled experiment the researchers set up. To set up the experiment, the researchers took luciferase – a firefly gene whose product glows in the dark – and tacked on 0, 3, 5, or 7 repeats of a short gene sequence which increases the speed at which the corresponding messenger RNA is destroyed. You can see below that the luciferase gene with 7 of these repeats produces only about 40% of the light of the “natural” luciferase, suggesting that the procedure worked – that we have created artificial genes which work the same but whose messenger RNAs degrade faster!

[Figure 3B from the paper]

So, we have our test messenger RNAs. Moment of truth: let’s take a look at what happens to the luciferase activity after we subject them to RNA-induced silencing. From Figure 3C:

[Figure 3C from the paper]

The chart above shows that the same RNA-induced silencing is much more effective at shutting down “natural” luciferase than luciferase which has been modified to be destroyed faster.

But what about genes other than luciferase? Does the same relationship hold for them too? The researchers applied microarray technology (which allows you to measure, at different points in time, the amount of almost any specific RNA you may be interested in) to study both the “natural” degradation rate of RNA and the impact of RNA-induced silencing. The chart on the left, from Figure 4C, shows a weak, albeit distinct, positive relationship between the rate of RNA destruction (the “specific decay rate”) and resistance to RNA-induced silencing (the “best-achieved mRNA ratio”).

[Figures 4C and 5A from the paper]

The chart on the right, from Figure 5A, shows the results of another set of experiments with HeLa cells (a common lab cell line). In this case, genes that had RNAs with a long half-life (a slow degradation rate) were the most likely to be extremely susceptible to RNA-induced silencing [green bar], whereas genes with short half-life RNAs (fast degradation rate) were the least likely to be extremely susceptible [red bar].

This was a very interesting study which really made me nostalgic and, I think, provided some interesting evidence for the simple first-order degradation model. However, the results were not as strong as one would have hoped. Take the chart from Figure 5A – although there is clearly a difference between the green, yellow, and red bars, the point has to be made using somewhat arbitrary/odd categorizations: instead of just showing me how decay rate corresponds to the relative impact on RNA levels from RNA-induced silencing, they concocted some bizarre measures of “long”, “medium”, and “short” half-lives and “fraction [of genes which, in response to RNA-induced silencing, become] strongly repressed”. It suggests to me that the data was actually very noisy and didn’t paint the clear picture that the researchers had hoped for.

That noise was probably a product of the fact that RNA levels are regulated by many different things, which is not the researchers’ fault. What the researchers could have done better, however, was quantify and/or rule out the impact of those other factors on the observed results, using a combination of quantitative analysis and controlled experiments.

Those criticisms aside, I think the paper was a very cool experimental approach to verifying a systems biology-oriented hypothesis built around quite possibly the first equation that a modern systems biology class would cover!

(Images from Figures 3B, 3C, 4C, and 5A of the paper)

Paper: Larsson et al. “mRNA turnover limits siRNA and microRNA efficacy.” Molecular Systems Biology 6:433 (Nov 2010) – doi: 10.1038/msb.2010.89
