Saturday, December 08, 2007

On Rubber Bands, Entropic Springs and Elastomers

Let us begin with a Gedanken experiment. Or you could literally do it if you have the ingredients. Imagine a ring made of some thin wire, or maybe a regular piece of string. Now imagine stretching it between your fingers. What would happen? It would just become taut and you cannot do much more. Now imagine doing the same experiment with a rubber band. You will be able to extend the rubber band to at least a few times its original size. And if you let it go, it goes right back to its original shape and size. So the experiment shows that rubbery materials can be “deformed reversibly” to many times their original size. This post tries to tell you how and why. The concluding paragraph also mentions how such considerations apply to biological systems (feel free to skip the section named “The theory”; the rest is a sufficient story by itself).

Microscopics of a regular solid

As our starting point, let us consider the elasticity of a regular material, the string or the metal wire in our earlier experiment. What is elasticity anyway? It is the theory that tells you how much force you need to exert in order to deform/extend a material by a given amount. Let us simplify even further and consider a spring attached to a wall, with us pulling on its free end. How much force do I need to exert to do this? This is given by Hooke’s law, which we all know and love. Let F be the force and x be the extension of the spring from its rest length. Then F = -kx is the statement of Hooke’s law, i.e., the amount of force I need to exert grows linearly with the extension. And the force law is characterized by a single constant k, the spring constant of the particular spring I am using.

Now, why did we start there? The reason is that you can get a reasonable theory for the elasticity of a solid if you consider it to be made up of atoms connected by springs (see the picture alongside), whose spring constants are determined by the electromagnetic interactions among the atoms [1]. What is the typical energy scale of this spring-like interaction, i.e., how stiff are these springs? It is typically a few electron volts, which makes them really stiff springs (but for us to realize that the spring is really stiff, I need to compare this energy scale to something else, right? We will come back to this later).

Microscopics of rubber

Note that the picture above is how a string or a wire looks at the microscopic scale. Next let us ask what a rubber band looks like [2]. It looks like the picture alongside. A rubbery material is made of coiled-up flexible polymers that are “crosslinked” by some chemical agents (the big black dots in my picture) [3]. For clarity I have caricatured the three polymers shown in the picture with different colors. The message here is that the polymers are in some coiled state. When I stretch such a material, what I am doing is pulling the black dots apart. What will this do to a polymer? It will uncoil it some. You already know that uncoiling a string costs much less energy than pulling on a fully extended string in an attempt to lengthen it. This is essentially the difference between rubbery materials and regular crystalline solids. Solids of this kind, to which the rubber band belongs, have a special name: they are called “elastomers”. Note that I can say I understand the elasticity of rubber-like materials if I can get the equivalent of Hooke’s law for this system, and for this purpose I need to understand what happens when I pull on a polymer. In the rest of this post, we will try to explain how to describe this theoretically.

The theory

Before we proceed with the theory, let us pause for a moment. How can what I said above be right? If I had a string coiled up on my table and I pulled it open to its full length, it would not cost me any energy at all. It comes apart nice and easy. If I were to take the analogy above seriously, the rubber band should not offer me any resistance at all. Pulling at a rubber band should be like pulling on water; it should just come apart. But this is clearly not true. So what did I miss? What I missed is called “entropy”. Suppose my rubber band is at zero temperature (no no, not 0 C or 0 F but 0 K). Then the analogy with the macroscopic string holds, and the rubber band should indeed flow like water till the polymers are completely extended. But at any finite temperature, the polymers in rubber are jiggling around with some kinetic energy. And that makes all the difference, as we will try to show below.

In order to quantify this notion, we need to ask what makes physical systems happy (some of these notions are developed in a slightly different context in this post). Let us consider a regular spring again. If we just let the spring be, it has a characteristic length; let us call this the rest length of the spring. This is the length at which the spring is happiest. Now I pull on the ends of the spring. I have to do some work to pull it because I am moving the spring away from the state it is happiest in. This work gets stored in the spring as potential energy. Alright, now let us ask: at finite temperature, what is the state in which a polymer is happiest? It is happiest when it has the largest entropy.

What is this entropy? For a polymer, we can understand it as follows. Suppose you have a coiled-up string. It looks like a disc, right? The size of this disc is called the “radius of gyration” of the polymer (a measure of the lateral extension of the polymer). Suppose the length of the polymer is L. If I ask you, “How many ways can you make an object of length L with it?”, you will tell me, “Exactly one way: stretch the polymer out to its full length.” Similarly, if I wanted to make an object of some size A which is much, much smaller than L, again we can do this in essentially one way, namely make a tight coil out of the polymer with each turn of the coil having a radius A. But if I wanted some intermediate-sized object, then I can make it in many, many ways, and in each of these ways the polymer will be coiled slightly differently. The entropy of a polymer with radius of gyration R is set by the number of different configurations the polymer can have given this radius as its lateral dimension. As stated earlier, the polymer wants to have maximum entropy. Hence, from the arguments above, it does not want to be fully extended or tightly coiled, but rather coiled up in some intermediate state. This intermediate state has a size that scales as the square root of its length L [4].
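If you want to convince yourself of this square-root scaling, here is a minimal sketch in Python (purely illustrative; the step length is set to one in arbitrary units): it builds many random walks of N steps and prints their root-mean-square size. Quadrupling N roughly doubles the size.

    # Minimal sketch: the rms size of an N-step random walk grows like sqrt(N).
    # Step length is 1 in arbitrary units; the numbers of walks and steps are arbitrary choices.
    import numpy as np

    rng = np.random.default_rng(0)

    def rms_size(n_steps, n_walks=1000):
        steps = rng.choice([-1.0, 1.0], size=(n_walks, n_steps, 3))  # random +/-1 steps in 3D
        ends = steps.sum(axis=1)                                     # end-to-end vectors
        return np.sqrt((ends ** 2).sum(axis=1).mean())               # rms end-to-end distance

    for n in (100, 400, 1600):
        print(n, rms_size(n))   # quadrupling n roughly doubles the size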

So in my unstretched rubber band, I have polymer coils that are happy, i.e., in the state of maximum entropy. Now I pull on the rubber band. What happens? I stretch the polymers. Their radius of gyration increases above the optimal value, their entropy goes down, and they are unhappy. So, just as you pay an energy cost to stretch a normal spring, you pay an entropy cost to stretch a polymer. If you calculate this cost, you can derive the equivalent of Hooke’s law for these polymers [5]. You then find that if I stretch a polymer by an amount x, the force I need to apply is F = (CT/L)x, where C is just a constant and T is the temperature of the system. So a polymer behaves like a spring with a spring constant determined by the temperature of the system! And the reason it behaves like a spring is a loss in entropy rather than a gain in energy. This is what people call an “entropic spring”. Now, if I compare this spring constant with the spring constant associated with the atomic solids we considered earlier, I find that it is smaller by a factor of roughly 10^5 (i.e., about 0.00001 times as large)! Thus polymers form very loose springs. And rubber is exactly like a regular solid but with a really itty-bitty spring constant that scales with the temperature of the solid [6].
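If you are curious where that factor comes from, here is a rough order-of-magnitude estimate; every number below (segment count, segment length, bond stiffness) is an assumed “typical” value, not a measurement.

    # Rough order-of-magnitude comparison of spring constants (all inputs are assumed values).
    kB = 1.38e-23            # Boltzmann constant, J/K
    T = 300.0                # room temperature, K
    N = 1000                 # number of polymer segments (assumed)
    b = 0.5e-9               # segment length, m (assumed)
    k_entropic = 3 * kB * T / (N * b ** 2)     # entropic spring constant ~ 3 kT / (N b^2)

    eV = 1.6e-19             # J
    k_atomic = 2 * eV / (1e-10) ** 2           # a couple of eV per angstrom^2 (assumed)

    print(k_entropic, k_atomic, k_entropic / k_atomic)
    # roughly 5e-5 N/m versus ~30 N/m: a ratio of order 1e-6 to 1e-5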

Conclusion

In summary, rubbers are solids made by crosslinking polymers. Polymers form entropic springs whose stiffness increases as the temperature increases. And this is part of why rubber bands feel stiffer and snap more easily when you stretch them on hot summer days! But at the start of the post, I said that such considerations are biologically relevant. How is that? The cytoskeleton of a cell is a rubber! It is a crosslinked polymer mesh made of filaments called actin and microtubules. You will now say to me, why would I want to know about the elasticity of this mesh? The reason is that it is the elasticity of the cytoskeleton that allows a cell (those that are not swimmers, that is) to crawl. And all questions associated with the motility of such cells boil down to understanding the elasticity of this mesh. And you need to start by understanding the elasticity of the plain old rubber band first!

Jargon, Caveats and Disclaimers

[1] Coulomb, screened Coulomb, Lennard-Jones, you take your pick.

[2] I am fudging scales here, mapping Angstroms to a fraction of a micrometer, but for simplicity we ignore this difference here.

[3] This process is called vulcanization; it turns a complex fluid into a solid, i.e., gives it a finite zero-frequency shear modulus.

[4] You can see this easily if you think of the polymer as a 3D random walk of length L. Then the RMS distance the walker travels is the square root of L, right?

[5] For the experts, note that you can derive this readily. Take the Boltzmann definition of the entropy, S = k_B log W, where W is the number of configurations of a polymer with radius of gyration R. Using the random-walk analogy from earlier, W ~ exp(-R^2/L) (with the step length set to one). The loss in entropy due to stretching is then S(R) - S(R+x), and the force is F = -T dS/dR (just a standard thermodynamic response relation).
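Written out in the same simplified units (constants of order one dropped), those two lines of algebra read:

    S(R) = k_B \ln W(R) \simeq \mathrm{const} - \frac{k_B R^2}{L},
    \qquad
    F = -T\,\frac{dS}{dR} = \frac{2 k_B T}{L}\, R ,

which is the F = (CT/L)x quoted in the main text, with C absorbing the numerical constants.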

[6] You should ask me now how come I ignored entropy when considering the elasticity of an atomic solid. The fact is that the entropy is strain-independent for harmonic solids and plays no role in the elasticity (a brief note on this is here).

Tuesday, November 20, 2007

A layman’s tutorial to the dark side II

In the previous post, we tried to answer the question “What is dark matter?” In this post, in the same reductionist spirit, we try to answer the question “What is dark energy?” [1]. For a number of years, I kept thinking that dark energy was just “E = mc^2”-type energy associated with dark matter. It was only in my second year in grad school that I realized how hopelessly wrong I was.

As in the context of dark matter, dark energy is postulated to exist in order to solve some problems in explaining observed phenomena. So first let us talk about what the problem is. The problem is that the universe is expanding and the rate of this expansion is increasing, i.e., the expansion of the universe is accelerating! The first question you might ask is “Are you sure?” or “How do we know this?” I do not want to discuss redshifts and the Hubble constant here, so I refer you to Wikipedia for more on that. What I would like to do here is take as a given that the universe is expanding and accelerating, and ask how we can understand this.

The theory of classical gravity is General Relativity. A layman’s minimal picture of what GR is, relative to the familiar Newtonian picture, can be summarized quickly enough [2]. But for the purpose at hand, it suffices to say that one of the consequences of GR is that matter and energy exert a pressure on space-time much like a gas in a chamber exerts a pressure on the piston [3]. Now, suppose we use this piston analogy for a minute. Say I have gas under pressure, kept that way by a weight on the piston. I suddenly remove this weight and ask you what you expect the motion of the piston to be like. You would tell me that the piston would first accelerate to a large speed, then decelerate slowly as the gas in the chamber expands. Yes? This same picture is what you would expect to apply to the universe as well. You can think of the total mass and energy of the universe as N in some units. Suppose the volume of space-time is V(t) at a given time t; then the pressure exerted by this mass and energy will be proportional to N/V. At the time of the big bang, i.e., t = 0, this was enclosed in a very, very small volume. Hence it must have exerted tremendous pressure and the universe must have expanded rapidly. As time goes on, the universe expands, V increases, N/V(t) decreases, and so the universe should expand more slowly than before. If this were the case, then there would be no problems and we would not have so many cosmologists so worried so much of the time.

But observations of faraway galaxies tell us that the universe is expanding faster than it was at earlier times! The question is, how can this be? Clearly, it cannot be from the regular mass and energy that we talked about earlier. So we have to think of something else. One possible “something else” is that there is an (as yet mysterious) energy associated with space-time itself. If this were the case, then as the universe expands, the number of space-time points increases in some sense. This intrinsic energy associated with the space-time points then increases as well, the pressure builds up, and the universe expands faster. So the existence of such an energy, the dark energy, could be one possible explanation for the accelerating universe. But where the heck does this energy come from? We have no clue at the present time. Hence the name dark energy.
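For the curious, here is a toy numerical sketch of the contrast (in arbitrary units; this is an illustration, not a real cosmological calculation). In general relativity the acceleration of the expansion is controlled by the combination rho + 3p of energy density and pressure: ordinary matter (p = 0, with a density that dilutes as the volume grows) decelerates the expansion, while an energy density that stays constant as space expands (with p = -rho, the dark-energy-like case) accelerates it.

    # Toy sketch (arbitrary units): scale factor a(t) obeying
    #   a''/a = -(1/2) * (rho + 3*p)      (all constants absorbed into the units)
    # Matter:      p = 0,    rho ~ 1/a^3  -> expansion decelerates
    # Dark energy: p = -rho, rho = const  -> expansion accelerates

    def evolve(kind, a=1.0, v=1.0, dt=1e-3, t_max=3.0):
        for _ in range(int(t_max / dt)):
            rho = 1.0 / a ** 3 if kind == "matter" else 1.0
            p = 0.0 if kind == "matter" else -rho
            acc = -0.5 * (rho + 3 * p) * a
            v += acc * dt
            a += v * dt
        return a, v

    print("matter:     ", evolve("matter"))        # a grows, but the expansion rate v falls
    print("dark energy:", evolve("dark energy"))   # both a and v keep growing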

[1] A more complete and erudite discussion is here.

[2] This is fishing: an “if you ask me, I will tell you” kind of thing.

[3] I know that relativists cringe at such statements, but I do not see how else to say this simply.

Friday, November 16, 2007

A layman’s tutorial to the dark side I

I am a condensed matter theorist. So, I know next to nothing about gravity or cosmology. But, this week, I attended a few Cosmology seminars and hence was motivated to write this post, which is intended to be a layman’s answer to the following question: WTF is dark matter and dark energy? I have been meaning to do this for a while, just because of all the press these things get, for example this old article on dark energy in NYT that was featured along with the cosmologist involved on David Letterman. There is even a movie by this name. This Friday evening is the time to get it off my chest! [1]

Alright. First let us begin with dark matter in this post, which in many ways is the simpler issue. There are all kinds of evidence that the matter we see in the universe is not all there is. What is some of this “evidence”? Let me try to give you a couple of examples. One of them is associated with “galactic gravitational potentials”. What does that mean? Suppose I am observing a galaxy through my telescope and I see a star that is far from the central bright core of the galaxy. Then I expect that the star will have a velocity (GM/R)^1/2, where R is the distance of the star from the center and M is the mass of the bright stuff in the middle [2]. Now I measure the velocity of this star, and it is moving much faster than this estimate. You might say, “Aha! You are just underestimating the mass of the bright object in the middle!” But if you believe that the universe is homogeneous (the same everywhere) and isotropic (the same in every direction you look), you have no choice but to conclude that there is just one universal parameter with which you have to fit the observed data, called the “mass-to-light” ratio (and you have no choice but to believe this hypothesis unless you also want to believe that the earth is somehow the center of the universe). And I urge you to go and play with this applet to see that you CANNOT fit the observed curve with just one such parameter. So there must be something else. That something else could be some mass that I cannot see, and that I know nothing about so far, such that the star I think is far away from most of the mass in the galaxy is not so far at all, because this stuff I cannot see fills the intermediate space that appears to me to be empty. So the postulation of the existence of this “dark matter” is one possible explanation for these weird velocities of apparently far-flung stars in galaxies.

One more piece of evidence is associated with the mass of clusters of galaxies. This is rather involved, but if you want, you can go read about it in this post on Cosmic Variance [3]. So let me move on to another piece of evidence. This one is associated with large-scale structure in the universe (this is just jargon for stars, galaxies, you and me). The way this argument works is as follows. We know how the universe is today: we use our telescopes, optical or otherwise, and map the mass density in the universe. We also know what the mass density in the universe was when it was only about 400,000 years old (very young on cosmological time scales). This information comes from the cosmic microwave background [4]. Then, knowing the mass in the universe and knowing the laws of gravity, I should be able to go from the scenario 400,000 years ago to now. But I cannot. It turns out that if I try to do this, I get a mass inhomogeneity much smaller than what we have today, to the extent that you and I could not be here. But we are here. So one possible explanation is that the “dark matter” we invented earlier was also present in the early universe; its distribution is not directly recorded in the cosmic microwave background, and without the extra gravity of this unseen matter we cannot get from there to the present structure of the universe, and to the fact that you and I are here.

Do you see? The postulation of this “dark matter” solves many problems in astrophysics and cosmology. But the trouble is that we don’t yet know what this “dark matter” is made of or how it talks to the regular matter that you and I are made of. We theorists have ideas, and there are experimentalists out there testing to see if any of these ideas hold water. But until then, we just have to live with “dark matter”! [5]

[1] There are a whole bunch of erudite articles on the web, for example this one by Sean Carroll. I will try to be very minimal here, nowhere near as erudite.

[2] You can do an itty-bitty circular motion calculation to see this, given that gravity leads to acceleration GM/(R^2) on a particle at a distance R.
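Spelled out, that itty-bitty calculation is just a balance between the gravitational pull and the centripetal acceleration of a circular orbit:

    \frac{G M m}{R^2} = \frac{m v^2}{R}
    \quad\Longrightarrow\quad
    v = \sqrt{\frac{G M}{R}} .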

[3] There is also an interesting comment thread here that is a back and forth on dark matter, that we (readers and writers) at scientific curiosity will do well to emulate! :)

[4] This cosmic microwave background comes from an event in the past of our universe that is called decoupling. But I have to do a lot more work to get this point across. I will provide an update with some appropriate reference subsequently.

[5] But the postulation of the existence of dark matter is not the only way out of the many problems that cosmologists face. The other ways out are deferred to a subsequent post coming up shortly, so stay tuned.

Thursday, August 09, 2007

Classification of Protein Structure

Structures of proteins: In all modern-day organisms, proteins play a wide variety of roles in the cell. At the molecular level, they are responsible for performing the mechanical work done by your muscles (as far as we currently know), in addition to the chemical catalysis that they perform on nearly all biochemical reactions inside the cell [1]. The immediate questions that beg to be answered are: "How are these complex molecules able to perform this work? How are they so specific in what they do and in the reactions that they catalyze?" While the answers to these questions are nontrivial and the subject of research in more than half the biophysics labs worldwide, the common theory in the scientific world is that the function of a protein is determined by its structure [2]. This statement is only partially accurate. The reasoning behind it is that a protein functions because it holds certain atoms or functional groups of the amino acids that make up its active site in a particular 3-dimensional configuration, and these functional groups are then able to catalyze the reaction. The specificity of the reaction comes from this particular 3-dimensional arrangement: catalysis happens only when the substrate is able to interact with these groups in a certain manner. While it is true that the structure does determine how a protein goes about performing its function, these structures are static pictures of a molecule that is otherwise in motion [3]. In addition to the global motion of the molecule, there are relative motions of the atoms that make up the protein, which lead to slightly different configurations of the important functional groups; hence proteins are neither completely specific, nor is their function completely determined by structure alone.

Classification of Protein Structure: A typical protein consists of thousands of atoms, and in order to make sense of a protein structure, it is necessary to simplify it. There are more than 30,000 structures in the protein database [4], and going through each structure one by one would be horrendous. Hence, one needs to come up with a classification scheme for protein structure.

One way of simplifying it is to break it into parts described by the secondary structure of the protein. There are various courses/books [1,2] that explain secondary structure, but for our discussion it is sufficient to know that certain configurations called alpha helices and beta sheets are common structural motifs found in nearly all proteins. While these secondary structures do help in understanding the local structure of proteins, they give very little insight into the chemistry that the protein is able to perform or into its active site itself.

A second and more meaningful attempt at classification of protein structure is to find certain common structural motifs that can exist independently and to classify proteins based on these motifs. For example, a protein can be multifunctional, with each function carried out independently by different parts of the protein even if you split them up. It therefore makes sense to split such multifunctional proteins up based on what function each part performs and, if you find the same structure/function motif in different proteins, to club them together as a single group. The part of a protein that can maintain its structure and function independently is called a domain [5]. Quite often, domains of one shape combine with domains of very different shapes to form quite different proteins (very much like building blocks coming together in different configurations to give walls of different shapes) [6].

To give meaning to the classification scheme, it also helps to know which proteins perform related functions (for example, the same reaction on different substrates) or have active sites in the same region of the protein structure. It is therefore better to form groups of closely related structures that perform similarly. Great minds have always argued that evolution should be the guiding principle when studying biology, and it does make sense to separate proteins that have a common evolutionary origin from those that have arrived at the same structure independently (so-called convergent evolution).

There are various databases that divide proteins into individual domains and group these domains into evolutionarily related sets, hierarchically. These include the SCOP (Structural Classification Of Proteins) database [7] (curated manually), CATH (Class, Architecture, Topology, and Homologous superfamily) [8], and FSSP (Families of Structurally Similar Proteins) [9] (built automatically). However, these databases are often flawed, and corrections to them are often suggested in the literature. Part of the problem is that it is very difficult to say whether similarity in structure arose through homology (evolutionary relatedness) or through convergence (evolutionarily independent origins). The trivial relationships are those that are apparent in the sequences of the two proteins. When two proteins have very similar sequences (measured by the number of positions at which they have the same or a closely related amino acid), they are related, and statistics based on extreme value distributions can be used to estimate the probability that both proteins have a common origin [11]. However, structure is conserved (varies little) far more strongly than sequence, and below a certain sequence identity it is very difficult to demonstrate a relationship between two proteins without a structure [10].
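To make the extreme-value statistics slightly more concrete, here is a small sketch in the Karlin-Altschul form used by BLAST-like sequence-comparison tools; the parameter values K and lambda below are placeholders chosen for illustration, since in practice they depend on the scoring scheme.

    # Sketch of extreme-value (Gumbel) statistics for alignment scores,
    # in the Karlin-Altschul form used by BLAST-like tools.
    # K and lam are illustrative placeholders; real values depend on the scoring matrix.
    import math

    def alignment_pvalue(score, m, n, K=0.13, lam=0.32):
        E = K * m * n * math.exp(-lam * score)   # expected number of chance hits this good
        return 1.0 - math.exp(-E)                # probability of at least one such hit by chance

    # two sequences of length 300 and an alignment score of 60 (made-up numbers)
    print(alignment_pvalue(60, 300, 300))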

Other problems are related to the processes by which structures are obtained (X-ray crystallography or NMR spectroscopy). These methods are inherently noisy for various reasons, ranging from fundamental resolution limits (Heisenberg's uncertainty principle) to the crystallization conditions and the substrates that interact with the protein. So there is never a single, completely correct structural alignment (that is, a one-to-one assignment of which residues in one structure overlap which residues in the other), and this too causes minor problems in the classification procedure.

But the most important problem is the level at which to classify structures. While domains are the most commonly used level of classification (because a domain is basically independent), domains might not have been the basic units from which proteins were constructed during evolution. Rather, small subdomain-level structural units called structural words [12] or foldons [13] (so named because they could be independent folding units) may have been the smallest protein units to evolve out of the RNA world. The theory is that these foldons could come together to form various different domains, which then evolved further to form proteins with different functions.

References:
[1] - Biochemistry by Stryer.
[2] - Introduction to Protein Structure by Branden and Tooze.
[3] - A perspective on enzyme catalysis by Stephen Benkovic and Sharon Hammes-Schiffer.
[4] - RCSB protein database.
[5] - Domains.
[6] - Multi-domain protein families and domain pairs: comparison with known structures and a random model of domain recombination by Gordana Apic, Wolfgang Huber & Sarah A. Teichmann.
[7] - SCOP.
[8] - CATH.
[9] - FSSP.
[10] - How far divergent evolution goes in proteins - Murzin.
[11] - Maximum Likelihood Fitting of Extreme Value Distributions - Eddy.
[12] - On the evolution of protein folds - Lupas, Ponting, and Russell.
[13] - Foldons, Protein Structural Modules, and Exons by Anna Panchenko, Z. Luthey-Schulten, and P.G. Wolynes.

Wednesday, August 08, 2007

About Rainbows

In this post, what I would like to do is illustrate scientific methodology and scientific curiosity in the context of a simple natural phenomenon we are all familiar with: rainbows. I choose this system only because we all think we understand it and because the physics involved is simple ray optics that we all learnt in school at some point (and of course it is pretty, as in the picture alongside). Now, the first step in scientific methodology is the collection and categorization of the facts we want an explanation for. In the context of rainbows, I want to be able to explain the following facts, which I have established by watching rainbows in the sky. Apart from the obvious one about the colors, they are:

1. Rainbows are seen when there is sun and rain (incipient or actual). That is why it is called a rainbow.

2. When I stand facing the rainbow, the sun is always behind me. I never see a rainbow on the same side of the sky as the sun.

3. The rainbow is a bow.

The next step is to look into my knowledge bank from the past and see what I already know that would be useful in explaining the above facts. And I have to do this piece by piece. Now, I remember something about dispersion, the breaking up of white light into its constituent colors when light moves from one medium to another: a water glass held appropriately in bright sunlight, prisms I played with when I was young, and so on. Yes? So I begin my quest to understand the rainbow by quantifying this vague notion in my head [1].

Willebrord Snellius and the one and only Rene Descartes figured this out for us 400 years ago. They found that if a monochromatic (just jargon for one-color) light ray is incident on the interface between two media (say air and water), then the light is refracted (jargon for “bent”) so that if the angle the incoming ray makes with the normal to the interface is ui, then the outgoing ray comes out at an angle [4] ur = sin^-1((n1/n2) sin(ui)), where n1 and n2 are properties of the two media in question called their refractive indices (again, it is just a name; I could have called the property Karthik or Pradeep, but for the sake of conformity I call it by the name already given to it). Don’t worry about the formula if it looks complicated to you. Think of it as follows: if someone told you that they shined light at the interface of two media of given refractive indices, you can just tap some keys on your calculator and know where to put your eye or your camera so that you can see the refracted ray. So much for that. But how does this explain dispersion? The key is that the properties n1 and n2 depend not only on what the medium in question is (i.e., water, air, glass, etc.) but also on the color of the light. Different colors will have different values of n1/n2. So even if they all come in at the same angle ui, as in the case of sunlight, they will come out at different angles, and hence I will be able to see all the different colors. That is why I am able to see different colors in a rainbow: because there is air and water involved. As an aside, also note that the above paragraph tells us that the fact that a straw in a water glass looks bent and the colors of the rainbow come from the same underlying physical equation! Cool, isn’t it? This is another aspect of scientific methodology, i.e., link together as many apparently disparate facts as possible as arising from one underlying phenomenon.
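If you want to play with the formula, here is a minimal sketch (the refractive indices are approximate values, chosen just to show the effect); it also shows what happens when the formula cannot be satisfied, which is exactly where the next paragraph picks up.

    # Sketch of Snell's law, n1*sin(u_i) = n2*sin(u_r), angles measured from the normal.
    # Refractive indices below are approximate, just to illustrate dispersion.
    import math

    def refracted_angle(u_i_deg, n1, n2):
        s = (n1 / n2) * math.sin(math.radians(u_i_deg))
        if abs(s) > 1.0:
            return None                       # no refracted ray: total internal reflection
        return math.degrees(math.asin(s))

    # air -> water at 50 degrees: red and violet emerge at slightly different angles
    print(refracted_angle(50, 1.000, 1.331))  # red
    print(refracted_angle(50, 1.000, 1.343))  # violet

    # water -> air at 60 degrees: beyond the critical angle (~49 degrees) nothing refracts
    print(refracted_angle(60, 1.331, 1.000))  # None -> totally internally reflected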

Wait a minute, this cannot be right. What I said above cannot be the whole truth. Why is that? It is because of fact 2 above. The sun is on the opposite side of the rainbow. So I cannot possibly be seeing bent light; I must be seeing reflected light that bounced off something. So, what did I miss? What I missed is hidden in that messy formula in the previous paragraph. Recall that the sine function takes values from -1 to 1. So if n1/n2 is bigger than 1, that equation cannot be satisfied for all values of the angle of incidence. What is wrong here? Clearly I can shine light at whatever angle I wish, so placing a restriction on ui makes no sense. So, ask again, what did we do wrong? What we did wrong was to assume that there is always a refracted ray, i.e., a ray that goes into medium 2. What the “impossible to satisfy” equation above tells us is that, if n1/n2 is bigger than 1, then beyond a particular angle all the light will be reflected back into medium 1. This phenomenon is called “total internal reflection”. If medium 1 is water and medium 2 is air in the earlier picture, then n1 is bigger than n2 and light incident at large angles will be reflected back into the water. So, in the context of the rainbow, what is happening is along the lines of the figure shown below. The light from the sun enters the raindrop, gets refracted at the front edge of the drop, travels through the drop, gets internally reflected at the back edge of the drop (i.e., the back edge of the drop acts like a mirror) and then comes back out of the front edge again. And this is the light that you and I on earth see as the rainbow. So we have established that we need refraction and total internal reflection to account for the colors and for the fact that the rainbow is on the opposite side to the sun with respect to the observer (jargon for you and me).

Still with me? Just hang on for a little bit more. We have only one fact remaining to explain: the fact that the rainbow is indeed a bow. Again the answer lies in the discussion earlier; we just have to tease it out. Let us do this by first noting that the picture in the previous paragraph is clearly an oversimplification. What is really happening is more like the picture below. Light rays from the sun hit the drop and are reflected and refracted at each interface. And you are standing in such a position that you get only one of a total of four outgoing rays from the drop. So the amount of light reaching you is a pretty small fraction of the light that fell on the drop. That is why we made such a big deal about the total internal reflection thing earlier, for it cuts out one of the outgoing rays and increases the intensity (brightness) of the one we get to see. Secondly, the reflected light is diffuse. What does this mean? The sun is far enough away that all the light coming from it can be thought of as parallel rays. If the interface at hand were flat, then all the reflected/refracted rays would be in the same direction, yes? (Just generalize the picture in the first part of the discussion to many rays to see this.) But our interface is a spherical water drop. So, even though the incoming light is all in the same direction, the outgoing light is going to be all over the place. And my eye is a pretty small hole in the scheme of things, and I am only going to get a ray or so of the reflected light, not enough to see anything [2]. But I do see the rainbow. How?

This part is slightly messier to state, so bear with me. Let us revisit the picture in the paragraph on total internal reflection for a moment. Since the sun’s light is all parallel, the angle of incidence is going to change depending on where on the sphere the light hits. The angle at which the light comes out to the observer, uf, depends on the angle of incidence as uf = ui - ud, where ud is called the angle of deviation (just another name). Now, clearly, by repeated application of Snell’s law, I can express this angle of deviation as a function of the angle of incidence, right? The details are unimportant for us, so let us just say ud = f(ui) for some known f. In order that I see as much light as possible, I need uf to change as little as possible when ui changes, yes? Which is of course the same as saying ud, or f, must change as little as possible. Now, I remember from some calculus class I took ages ago that a function is “stationary”, i.e., changes as little as possible, near the points at which it takes its minimum or maximum value. Do you remember this as well? So, I am most likely to see enough light to make out my rainbow when ud is a minimum (you can easily convince yourself that you would have to be on the moon or something to see the region where ud is a maximum). For a drop of rain water and for red light, this turns out to be a position such that your eye is located at an angle of about 42 degrees to the direction of sunlight (look at the picture to see what I mean). Now, I clearly cannot change where the sun is or where the water is. So I just see all those water drops that make this angle with my eye. And voilà! It is a bow!
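If you want to see the 42 degrees fall out of this minimum-deviation argument, here is a small numerical sketch. For the primary rainbow (two refractions plus one internal reflection) the total deviation of a ray entering at incidence angle ui works out to D = 180 + 2 ui - 4 ur; minimizing D over ui for water (n of about 1.33 for red light, an approximate value) gives a rainbow angle of 180 - D, roughly 42 degrees.

    # Sketch: minimum deviation for the primary rainbow (two refractions + one
    # internal reflection), using an approximate refractive index for red light.
    import math

    def deviation(u_i_deg, n=1.331):
        u_r_deg = math.degrees(math.asin(math.sin(math.radians(u_i_deg)) / n))
        return 180.0 + 2 * u_i_deg - 4 * u_r_deg

    angles = [i / 10.0 for i in range(1, 900)]    # incidence angles 0.1 ... 89.9 degrees
    best = min(angles, key=deviation)             # angle of incidence giving minimum deviation
    print(best, deviation(best), 180.0 - deviation(best))
    # -> deviation of about 137.6 degrees, i.e. a rainbow angle of about 42 degrees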

Phew! We are done. We succeeded in explaining all the things we set out to explain. But just to throw a wrench in the works, let me point out why you should not be happy yet. I can think of a hundred reasons, but let me state the first couple that come to mind. In all of the above, I thought of light as a straight line (ray optics). But I remember somebody telling me light is made of photons, little blobs of energy. I even remember learning that light is a wave, just like the wave I can make in a string by oscillating it. WTF? How is it a straight line, a blob and a wave? On a totally different front (a front on which I don’t know the answer), I “know” that I see VIBGYOR when I see a rainbow. Hey! But white light is a “continuous” mixture of wavelengths (colors). So perhaps what this VIBGYOR business is telling me is something about the color resolution of the cones in my retina? On the same note, what is it in the processing of images in my brain that leads me to see rainbows around light bulbs when I am drunk or sleepy but not otherwise? That is scientific curiosity for you, and there is more than enough stimulus for it in the world around us to keep me occupied for the rest of my days! [3]

[1] You can do the simplest of things, go to wikipedia and read this and this.

[2] It is over and beyond my patience levels to make a picture illustrating this. So I recommend that you go and play with this Java applet to see for yourself that this is true.

[3] Apologies on the length of this post. I “cross my heart and hope to die” when I say my future posts will be way shorter!

[4] After I uploaded everything in blogger I see that it has made all my theta's into u's. So the u in the text corresponds to theta in the images.

Tuesday, July 03, 2007

Scientific Communications in Web 2.0 Context

This is a slightly out-of-context post that covers, instead of a particular aspect of science, some recent developments that may change the paradigms of scientific communication.

Despite the stereotype of lab-coat-wearing geeks buried in their work with little connection to the outside world, communication is an extremely important aspect of a scientist’s job. Modern scientific research cannot be conducted in isolation. Hence scientists need to disseminate information effectively, whether presenting data in the informal setting of a lab meeting, or in more formal talks or posters at seminars and scientific conferences. Additionally, there is the matter of publishing scientific findings in technical journals and convincing peers of the importance of their work when applying for grants. In a broader scope, scientists also need to spread knowledge to the general lay audience (especially in the current atmosphere of countries like the US, where scientists are being broadly discredited through active political agendas).

Traditionally, the World Wide Web has been employed by scientists as a tool to read and respond to e-mail, to search and read journal articles (the old practice of going to the library to access paper copies of journals is all but obsolete), and to search for information on products, procedures, etc. (not to mention keeping bench scientists occupied while they wait for reactions to incubate or gels to complete their run). However, the role of the internet in science communication is rapidly expanding. The advent of the hyper-networked platform of the so-called Web 2.0 has opened up excellent opportunities for scientists both to reach out to wider audiences and to improve communication within their own community.

A major advance has been in the communication of science to wider audiences through media such as blogs (this blog itself is a humble attempt in that direction). Previously, only a select group of science writers and a small number of publications could reach this audience. But given the ease of setting up and maintaining a blog and its potential reach, scientists now have unprecedented access to audiences with whom to talk about technical aspects as well as science policy, its future, and so on. A good example is the wide assortment of blogs hosted under the banner of Scienceblogs, the majority written by active science researchers.

On the technical side, two exciting portals that could possibly revolutionize scientific communication have come online in recent times. Late last year, the Public Library of Science (PLoS), a non-profit organization championing ‘open access’ in science publishing, launched a web-based journal called PLoS One. Other than being openly accessible to anyone with an internet connection (as opposed to journals that require paid subscriptions), this online journal is distinguished by its criterion for acceptance: the peer-review process considers only the technical and methodological soundness of the scientific experiments, and accepts papers without any subjective consideration of the perceived importance or relevance of the work.

While PLoS One accepts completed manuscripts, the highly reputed journal Nature recently launched a site called Nature Precedings where scientists can submit pre-publication data and ideas in the form of ‘presentations, posters, white papers, technical papers, supplementary findings, and manuscripts’. Precedings does not have any peer-review system other than a check for completeness and scientific relevance (i.e., to make sure no non-scientific or pseudo-scientific material is posted). Also, while PLoS One accepts manuscripts related to any ‘science or medicine’, Precedings is restricted to ‘biology, medicine (except clinical trials), chemistry and the earth sciences’.

Both Precedings and PLoS One submissions are assigned a unique number called the Digital Object Identifier (DOI) which enables other researchers to cite the articles in their own communications. Additionally, in both cases, the authors retain copyright of the articles through a Creative Commons License. Both sites also have Web 2.0 features such as RSS feeds and tags enabled.

But perhaps the most exciting feature on both Precedings and PLoS One is the ability of readers to comment on the published papers or posts, the idea being that science should be interactive and the connectivity of the web should enable researchers to participate actively in discussions with a broad audience. Additionally, the ability to vote on papers and submissions provides an alternative form of peer-review (a scientific equivalent of Digg?)

Therefore, unlike publishing in traditional journals, a process that takes a few months to complete, or presenting at a conference, of which only a few are held each year and typically with a restricted audience, these portals allow rapid dissemination of information to a large, geographically unrestricted group of scholars. In some ways it is like presenting your data at a big conference, without the actual travel. Potentially, a huge beneficiary could be science in economically poorer countries (or even scientists with sparse budgets in developed countries), where researchers do not have the resources or funding necessary to attend many high-quality conferences.

Another benefit of scientists widely using these services is the potential reduction of research redundancy. Especially in the interdisciplinary scenario of today, there are often two or more research groups employing similar methods for a single purpose. While competition is good in some cases, in this day and age of restricted budgets for science, it is perhaps better to collaborate than to compete.

However, the major concern for the success of such initiatives is whether enough scientific researchers will participate in submitting, commenting and engaging in meaningful discussion. Old mindsets are difficult to change; currently, scientific scholarship is judged by the number of publications, but even more by the quality of the journals published in, as decided by their Impact Factor. Therefore many researchers will prefer to publish in traditional, arguably more prestigious journals. Moreover, in the case of Precedings, it is possible that many laboratories around the world will be wary of releasing novel findings or new ideas for fear of being scooped by others. Secondly, there is the concern about participation in the discussions. For example, while a significant number of papers have been published in PLoS One, very few are commented on, let alone host an active discussion [1]. Nature’s previous attempt at an ‘open’ peer-review system was a failure of sorts as well. Some scientists may even view such activities as time-wasting diversions from real work. Another criticism, mainly of PLoS One, is that the fee for publishing is rather high – 1250 US dollars – which might be too steep for scientists with low research budgets.

Still, one can hope that with time, scientists will come to embrace the use of online resources for rapid sharing and discussion of their research. In the world of physics and mathematics research, the Cornell University-maintained pre-print portal, arXiv, has achieved this goal with great success. It is time for all branches of science, especially the ever-expanding biomedical sciences, to welcome the concept. Publishing or pre-publishing at sites like PLoS One or Precedings and obtaining high votes or encouraging active discussion should be looked upon as meaningful scholarly achievements. One can also hope for further engagement of internet technologies in science, e.g., laboratories using a Wiki-like platform to update their results, experimental protocols, etc. Fittingly, I will cite this presentation posted on Precedings on what such a communication scheme might look like.

--------------------------------------------------------


[1]: PLoS has recently engaged the services of an Online Community Manager to encourage commenting. The job is held, incidentally, by a very active science blogger, who got it in a very Web 2.0 manner, with the initial contact occurring through his blog!

Thursday, June 28, 2007

Could life have started with Simplicity?

One of the perplexing questions people ask about the origin of life is how such complexity could ever have evolved from a simple broth of chemicals in the prebiotic world. Among the first to attempt an answer were Harold Urey and Stanley Miller, who created a chemical soup of ammonia (reduced nitrogen), methane (reduced carbon), hydrogen (expected in a reducing atmosphere) and water, and subjected the soup to electric discharges (simulating lightning and solar radiation). This experiment was performed in the 1950s to simulate early Earth conditions. After the electric discharge passed through the soup, simple amino acids, sugars, and raw materials for nucleic acid bases such as adenine were found to have been created in the mixture [1]. These are the raw ingredients for biochemistry to start, and the experiment brought the question of the origin of life into the realm of experimental science for the first time. Even though the assumed conditions of the early Earth have since come into question, the experiment remains a deservedly celebrated landmark. In fact, the experiments were repeated recently with nitrogen gas instead of ammonia, carbon dioxide instead of methane, and hydrogen or water (currently accepted conditions for the early Earth), and the products of the broth were similar in nature to those found in the original Urey-Miller experiment.

In the prebiotic world envisioned by most scientists, chemistry would have dominated the changing scenario and landscape found on Earth. Chemistry, unlike biochemistry, is very nonspecific and would create a huge pool of chemicals. Under the assumption that the seeds of modern cellular biochemistry were present in that pool (and this is a big assumption, made out of necessity), all or most of today's biochemical reactions would have been a small subset of all the reactions occurring in this pool, a stage called protometabolism [2]. Somehow, after the first catalysts were formed (nowhere near as efficient as modern enzymes), those catalysts favored a subset of these reactions and made them occur at a faster rate, leading to a feedback mechanism by which these reactions became the dominant ones and eventually produced the biochemicals, and the life, we know now.

One such theory of the origin of life states that an autocatalytic reaction cycle was present in the chemical gemisch of the prebiotic world and, by the very nature of being autocatalytic, started dominating this prebiotic world, leading to the first signs of life [3-6]. One such autocatalytic cycle is the tricarboxylic acid cycle (TCA, also known as the Krebs or citric acid cycle), which is present in all modern organisms in one form or another [7]. Run in reverse, the TCA cycle is a route of carbon fixation into biochemicals starting with carbon dioxide as the sole source of carbon [8,9]. In this reversed form, called the reverse TCA cycle (and found in a few organisms), the overall reaction can be visualized as one molecule of citrate plus six molecules of carbon dioxide, with hydrogen supplying the reducing power, being converted into two molecules of citrate and water. The important thing to note is that two molecules of citrate are formed from one molecule of citrate, hence producing more of the reactant. In other words, both molecules of citrate can be used as reactants in the next round, and the cycle is therefore called autocatalytic. Because it is autocatalytic, once prebiotic conditions existed under which this cycle could run completely (all the reactions in it have to take place), it would, after some time, have run much faster than everything else and slowly come to dominate the early prebiotic metabolism.
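To see why "autocatalytic" matters so much, here is a toy sketch (arbitrary units and rates, purely illustrative): a reaction whose product catalyzes its own formation grows exponentially, while an ordinary reaction running at a fixed rate grows only linearly, so given enough time the autocatalytic cycle dominates the pool.

    # Toy comparison (arbitrary units and rates):
    #   autocatalytic:      d[citrate]/dt = k * [citrate]   -> exponential growth
    #   non-autocatalytic:  d[product]/dt = k               -> linear growth
    k, dt = 0.1, 0.01
    citrate, product = 1.0, 1.0
    for _ in range(int(100 / dt)):        # integrate both for the same stretch of time
        citrate += k * citrate * dt
        product += k * dt
    print(citrate, product)               # the autocatalytic product dwarfs the ordinary one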

In addition, in modern cells the TCA or the rTCA cycle is at the center of metabolism. In other words, the intermediates of the TCA cycle lead to the amino acids, nucleotides, and cofactors for the rest of the cellular machinery. So, once this cycle started to dominate the prebiotic world, side reactions would have started producing amino acids and nucleotides, leading to the complexity required for biochemistry to begin [8]. However, conditions under which this cycle can run to completion have not been found so far. Moreover, the source of energy for these reactions, and the compartmentalization needed to build up sufficiently high local concentrations of these biochemicals, are still matters of speculation and further research.

It has been postulated that under early prebiotic conditions these reactions could have taken place on clay or on metal sulfide surfaces such as FeS. The metal sulfides themselves would have been oxidized (for example, FeS to pyrite), providing the energy needed to drive the reactions to completion [3,4]. Another theory is that it may not have been the TCA cycle at all but some other cycle, like the ribose cycle, that lay at the origin of metabolism [5]. The advantage of the ribose cycle is that, unlike the TCA cycle, only one or two of its reactions fail to proceed at an appreciable rate without a catalyst, and hence only one or two reactions need the clay or metal surface as a catalyst.

In either case, it is a real question whether an autocatalytic cycle should be considered life. In my opinion it should not, even though at the end of the day it produces more of itself (a chemical form of reproduction) and there is energy conversion in the cycle (metabolism). It is just that life is very specific and driven, unlike early chemistry, which would have been highly nonspecific. But this is certainly a matter of speculation and discussion.

[1] Biochemistry - Stryer.
[2] Singularities - de Duve.
[3] Wächtershäuser - Evolution of the first metabolic cycles - PNAS, 87:200-204, 1990.
[4] Wächtershäuser - On the chemistry and evolution of the pioneer organism - Chemistry and Biodiversity, 4:584-602, 2007.
[5] Orgel - Self-organizing biochemical cycles - PNAS, 97:12503-12507, 2000.
[6] Smith and Morowitz - Universality in intermediary metabolism - PNAS, 101:13168-13173, 2004.
[7] Wikipedia entry on Citric acid cycle.
[8] Morowitz, Kostelnik, Yang, and Cody - The origin of intermediary metabolism - PNAS, 97:7704-7708, 2000.
[9] Srinivasan and Morowitz - Ancient genes in contemporary persistent microbial pathogens - Biol. Bull., 210:1-9, 2006.

PS: Stanley Miller passed away this year at the age of 77 and this post is dedicated to him.

Tuesday, May 29, 2007

Resolving the panorama

This post is about the image stitching methods used to make a panoramic image. Panoramic images have become important in the digital age. Initially, panoramic images were developed to increase the field of view of a photograph. In the digital age, because prints need a resolution of at least about 200 dots per inch (explained here), the way to produce large prints for posters is to take a number of photographs with at least 15% overlap and stitch them together later using software. To take the individual pictures that make up a panoramic picture, the best technique involves using a tripod, so that the camera only rotates about a fixed point (ideally the no-parallax point of the lens), eliminating parallax error. In addition, the aperture and shutter speed should not vary between the pictures. More tips on the technique of panoramic pictures can be found all over Google or by sending an email to me. This post is more about the science behind stitching the images of a panoramic picture together.

The idea of image stitching is to take multiple images and make a single image from them with an invisible seam, such that the mosaic remains true to the individual images (in other words, it does not change the lighting effects too much). This is different from just placing the images side by side, because there will be differences in lighting between the two images, and that would lead to a prominent seam in the mosaic.



This figure shows three photos, with the locations of the seams marked by black boxes on each picture, and the final mosaic formed from all three.

The first step is to find points that are equivalent in two overlapping pictures [1]. This can be done by considering a neighborhood of pixels around a given pixel in each of the two pictures and finding the regions whose colors match between them. The images are then placed or warped onto a surface such as a cylinder (because the panoramic picture is quite often a 2-dimensional representation of the overlapping pictures on a cylinder). After this step, the seam curve is found that gives the best match between the equivalent pixels of both images. Then the images are stitched together with color correction. The rest of this post deals with the various algorithms for this blending/color correction step.



This figure is an example of the Feathering approach.


1. Feathering (Alpha Blending): In this method, at the seams (the regions of overlap), the pixels of the blended image are given colors that are effectively linear combinations of the pixel colors of the first image and the second image. The effect is to blur the differences between the two images at the edges. An optimal window size is chosen so that the blurring is least visible (a small code sketch of this blend appears below).




This figure shows the optimal blend between the 2 figures in the previous figure.
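For concreteness, here is a minimal NumPy sketch of the feathering idea, assuming the two images have already been aligned and warped onto the same canvas; the overlap width and pixel values are made up.

    # Minimal sketch of feathering (alpha blending) across a horizontal overlap.
    # img1 and img2 are float arrays of shape (H, W, 3) already aligned on one canvas;
    # 'overlap' is the assumed width of the seam region in pixels.
    import numpy as np

    def feather_blend(img1, img2, overlap):
        h, w = img1.shape[:2]
        alpha = np.ones((h, w, 1))
        alpha[:, w - overlap:, 0] = np.linspace(1.0, 0.0, overlap)   # ramp 1 -> 0 over the seam
        return alpha * img1 + (1.0 - alpha) * img2

    # toy example: two flat images with slightly different brightness
    a = np.full((100, 200, 3), 0.60)
    b = np.full((100, 200, 3), 0.75)
    blended = feather_blend(a, b, overlap=40)
    print(blended[0, 0, 0], blended[0, -1, 0])   # 0.60 on the far left, 0.75 on the far right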


2. Pyramid Blending: In addition to the plain pixel representation, images can also be stored as pyramids. This is a multi-scale representation in which the image is stored as a hierarchy, or pyramid, of low-pass filtered versions of the original, so that successive levels correspond to lower spatial frequencies (i.e., the image is divided into layers that vary over larger or smaller regions of space, such that their sum gives back the original image). During blending, the lower frequencies (which vary over larger distances) are blended over spatially larger distances and the higher frequencies are blended over spatially shorter distances [1], which produces a more realistic blended image. In practice, the pyramid is built from band-pass (Laplacian, i.e., difference-of-levels) images; the blended pyramid is formed level by level and then reintegrated to form the final blended image (a compact code sketch follows below).



This figure shows the pyramid representation of the pixels in an image and the pyramid blending approach.
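Here is a compact sketch of the Laplacian-pyramid idea in Python/NumPy, assuming grayscale images whose sides are powers of two so that the up- and down-sampling stays simple; the function names and parameter choices are mine, not from any particular library.

    # Sketch of Laplacian-pyramid blending (grayscale, square images, power-of-two sizes).
    import numpy as np
    from scipy.ndimage import gaussian_filter, zoom

    def gaussian_pyramid(img, levels):
        pyr = [img]
        for _ in range(levels - 1):
            img = gaussian_filter(img, sigma=1.0)[::2, ::2]   # blur, then downsample by 2
            pyr.append(img)
        return pyr

    def laplacian_pyramid(img, levels):
        g = gaussian_pyramid(img, levels)
        lap = [g[i] - zoom(g[i + 1], 2, order=1) for i in range(levels - 1)]
        return lap + [g[-1]]                                  # keep the coarsest level as-is

    def pyramid_blend(img1, img2, mask, levels=5):
        # mask is 1 where img1 should dominate, 0 where img2 should; blurring it at
        # coarser levels makes the low frequencies blend over wider regions.
        l1, l2 = laplacian_pyramid(img1, levels), laplacian_pyramid(img2, levels)
        gm = gaussian_pyramid(mask, levels)
        blended = [m * a + (1 - m) * b for a, b, m in zip(l1, l2, gm)]
        out = blended[-1]
        for level in reversed(blended[:-1]):
            out = zoom(out, 2, order=1) + level               # upsample and add detail back
        return out

    # toy example: blend a dark and a bright 256x256 image, left half taken from img1
    img1 = np.full((256, 256), 0.3)
    img2 = np.full((256, 256), 0.8)
    mask = np.zeros((256, 256))
    mask[:, :128] = 1.0
    print(pyramid_blend(img1, img2, mask).shape)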

3. Gradient Domain Blending: Instead of building a multi-resolution representation of the image as above, gradient domain blending works with the first derivatives (gradients) of the images. The image resolution is therefore not reduced before the blending process, but the idea is the same as above. This method can also be used to find the optimal window size for alpha blending, and it adapts to regions that vary faster or slower.


This figure shows the gradient blend approach.

Sources:
[1]: http://www.cs.huji.ac.il/course/2005/impr/lectures2005/Tirgul10_BW.pdf
[2]: http://rfv.insa-lyon.fr/~jolion/IP2000/report/node78.html

Wikipedia article on feathering.

All figures taken from http://www.cs.huji.ac.il/course/2005/impr/lectures2005/Tirgul10_BW.pdf

Thursday, May 24, 2007

Biological Control - Doing it yourself.

It was not too long ago that the whole of biology was very protein- and DNA-centric. The reasoning was that proteins do all the work in the cell, be it chemical work (enzymes) or physical work (motors and pumps). DNA was important because it provided all the information to make the proteins and contained the set of genetic instructions passed on from generation to generation. For a long time there was a battle over whether DNA or protein was more important, neglecting DNA's chemical cousin, RNA.

RNA was considered merely a step required in modern organisms to convert DNA into proteins. RNA is made up of nearly the same chemical constituents as DNA, but it is more flexible and can adopt a wide range of three-dimensional structures, unlike DNA's double helix. This increased flexibility comes at a price, however - RNA is less stable, and in modern cells a single molecule of RNA does not remain functional for long (its mean lifetime is roughly 5 minutes in E. coli).

Of course, all this changed when it was found that RNA molecules could act as catalysts; even in modern-day cells there are RNA catalysts, called ribozymes (and the list of known ribozymes keeps growing). RNA captivated the imagination of biologists because here was a molecule that could store genetic information as well as act as a catalyst - taking on the dual role of enzyme and information storage. All of a sudden, RNA was placed at the origin of life as we know it. One should note, however, that the RNA world hypothesis does not claim that only RNA was present. It postulates only that RNA was present and dominant, while other biochemicals such as peptides (small proteins) and DNA oligomers (short DNA molecules) were also present and aiding life (an idea originally proposed in [1]).

One of the biggest objections to the RNA world hypothesis has been that RNA does not play that big a role in modern cells. More recently, however, many RNA control elements have been found in the cell. One such control element is the riboswitch. For a gene to be expressed, the DNA is first converted into a message called mRNA (messenger RNA), which is later converted into the protein corresponding to that message. It is increasingly being found that mRNAs do not contain only the message to be read; certain control elements can also be present in the mRNA. These control elements are called riboswitches.

Let's take an example. Suppose you want to make vitamin B1. There is an intermediate in its biochemical pathway called thiamine pyrophosphate (TPP). TPP is also important for nucleotide (the chemical constituent of RNA and DNA) and amino acid (the chemical constituent of proteins) biosynthesis, so it is important for the cell to channel the right amount of TPP into the different biochemical pathways. When too much TPP is present in the cell, TPP binds to a certain riboswitch in its own biochemical pathway. This causes the riboswitch [2] to adopt a defined three-dimensional structure (from an earlier random or semi-structured RNA element). This defined structure blocks the production of the protein that makes more TPP. The switch in the mRNA thus turns production of the TPP-making protein on or off depending on whether enough TPP is present in the cell - hence regulating the production of TPP itself. So far, riboswitches have been found mostly in the microbial world and are only now being discovered in eukaryotes.

Now, in the latest issue of Nature, the first riboswitch that controls splicing in higher organisms such as fungi has been reported [3]. Splicing is the mechanism by which parts of the mRNA are removed before the protein is made, so that parts of the DNA are never translated into protein. Alternative splicing is the mechanism by which a single gene at the DNA level can be translated into multiple different proteins; this is done by excising different parts of the mRNA (excising a given part in one situation and not in another) before it is converted into protein. Splicing and alternative splicing occur only in eukaryotes and have also been discussed here.

Anyway, this first eukaryotic riboswitch in the mRNA has been found to work through alternative splicing, and the TPP biochemical pathway discussed above is the system in which it was found. In this case, when TPP is present, the riboswitch forms a three-dimensional structure that prevents splicing, and the protein that is formed cannot make more TPP. The objective is again control of the TPP concentration in the cell, but the means used is alternative splicing rather than simply blocking formation of the protein. The implications of these results will only become clear with time, but there is speculation that this opens up a whole Pandora's box of riboswitches waiting to be found in eukaryotes.

[1] The Genetic Code - Carl Woese, 1968.
[2] Thiamine derivatives bind messenger RNAs directly to regulate bacterial gene expression. Wade Winkler, Ali Nahvi & Ronald R. Breaker. Nature 419, 952-956 (2002).
[3] Control of alternative RNA splicing and gene expression by eukaryotic riboswitches. Ming T. Cheah, Andreas Wachter, Narasimhan Sudarsan & Ronald R. Breaker. Nature 447, 497 (2007), and its companion discussion article - Molecular biology: RNA in control. Benjamin J. Blencowe & May Khanna. Nature 447, 391 (2007).

PDFs of all cited articles available on request.

Monday, May 21, 2007

Battle of sexes

Human beings are diploid - that is, each of us carries one copy of every chromosome from Mom and one from Dad. This gives us the advantage of having a spare copy of any given gene. However, certain genes are "marked" in the embryo in such a way that either Mom's or Dad's copy is selectively silenced. The end result is that some genes in our body come with instructions attached: "I am from Mom, use only me!" or vice versa. The process that does this is called imprinting - either maternal (only Mom's copy is used) or paternal (only Dad's copy is used).

Why develop this curious phenomenon? On the surface it seems counterproductive: if the marked/imprinted gene is defective, there is no working copy left, since the silenced copy from the other parent can never be used. So why evolve such a complex yet dangerous mechanism? Since the process became known to scientists in the early 1960s, several hypotheses have been put forth as to why it occurs. One of the most popular highlights a peculiar property inherent in a gene - its selfishness.

The Haig hypothesis is simple - it relates the development of a baby to the parents' fidelity. The hypothesis, put forth by David Haig, predicts that in any non-monogamous species Mom and Dad have different interests when it comes to the development of their baby, and hence imprint genes that are involved in the growth of the embryo. Simply put, Mom and Dad fight a genetic war over the baby, all the more so if either of them is prone to promiscuity!

Is there evidence for this prediction?
There is an excellent study done with the deer mice of the genus Peromyscus. This genus is perfect for the test because it contains both monogamous and polygamous species that can interbreed, namely P. maniculatus and P. polionotus. The females of the dark brown Peromyscus maniculatus are promiscuous (babies within a single litter often have different fathers). Peromyscus polionotus, the sandy mouse, however, pairs for life.

Check scenario one - Dad screws around but Mom is faithful.
In this case, the dad knows that the chances that all the offspring his mate carries are his are slim. So he has to find a way to make sure his babies grow faster, at the cost of all the other siblings and even of Mom.

This is exactly what you see when you mate a faithful Peromyscus polionotus female with a P. maniculatus male. The pups obtained are huge, and the mothers die giving birth.
The reason? The Peromyscus maniculatus dad has passed on gene copies that make his babies grow faster, since the females of his species are promiscuous. But the poor faithful Peromyscus polionotus mom has never had to play this war and has no defense against the signals he is sending in. So the babies, prompted by Dad's genes, grow unchecked, use up the mom's resources and kill her.


Check scenario two - Mom is promiscuous but Dad is not.
The mom knows that since every pup in the litter she carries has her genes, she can best spread her genes in the population by restricting the growth of any one fetus, conserving resources for her offspring with other males. So the genes she imprints slow fetal growth.
That is what happens when you mate promiscuous P. maniculatus females with a steadfast Peromyscus polionotus male - you get tiny pups.
What happens here? The mom uses her imprinted copies to slow down the growth of the babies, but the counterpart signal to grow is never received from the dad. The result is puny babies.

What if both parents are faithful, or both are promiscuous?
The offspring from a P. maniculatus x P. maniculatus cross, or from a Peromyscus polionotus x P. polionotus cross, are healthy and similar in size. The reason? Each partner has co-evolved the defenses. In the promiscuous pair, the dad signals the babies to grow faster and the mom signals them to grow slower, and the two cancel out. In the faithful pair, each parent has the same vested interest in the offspring. The end result is a normal-sized litter.

What about humans?
So far, about 80 of the roughly 30,000 genes in the human genome are known to be imprinted. More importantly, most of these genes seem to play a role in directing fetal growth - and in the direction you would expect if humans were not strictly monogamous: genes expressed from the dad's copy generally increase resource transfer to the child, whereas maternally expressed genes reduce it. So our genes behave much like those of the promiscuous mice! However, imprinted loci are also implicated in behavioural/neurological conditions (such as Prader-Willi syndrome), indicating that there is more to understand about this phenomenon.

More support for the theory comes from early indications that there is very little imprinting in fish, amphibians, reptiles and birds. Since the hypothesis links imprinted genes to acquiring resources from parents, this makes sense. But imprinting also exists in seed plants, where the endosperm tissue acts like a placenta to feed the embryo; why this is the case is still unclear. A lot of research is ongoing and more needs to be done. As molecular tools improve, we will be able to dissect the roles of imprinted genes much more easily.

Ref:
1. Dawson, W.D. Fertility and size inheritance in a Peromyscus species cross. Evolution 19, 44-55.
2. Vrana et al. Genomic imprinting is disrupted in interspecific Peromyscus hybrids. Nature Genetics 20, 362-365.

Sunday, May 13, 2007

Global Warming Facts - Part 1

(I'm referring to news articles rather than scientific articles, and avoiding technical discussions in order to keep this article readable to everybody.)

If I told you that the Ganges and the Brahmaputra will both dry up by the year 2035, how hard would you laugh at me? Now, what if it was the world's leading scientific authority on climate change that told you?

I'm sure every one of us knows at least a little bit about global warming: that it is primarily caused by the greenhouse effect, and that greenhouse gas levels in the atmosphere have been rising because of industrialization and deforestation, that rising global temperatures will melt polar ice caps thus causing sea levels to rise, and so on. However, until recently, we've all been led to believe that we have a century or two to cut greenhouse emissions and quell the problem. The key phrase there is "until recently", because climate science has now progressed enough to tell us how bad the situation really is.

How badly will India be hit?
The first sentence of this article must have set alarm bells ringing in your head. But a little thought will tell you why the Ganges would dry up, if not when: the Ganges, and indeed all perennial rivers in North India, are fed by glaciers in the Himalayas. As global temperatures rise, the glaciers receive snow later and start melting earlier, causing them to gradually retreat to the colder regions. This news article [1] in The Hindu has a detailed discussion of the effect of global warming on glaciers. The world's leading authority on climate change, the Intergovernmental Panel on Climate Change (IPCC), believes that all North Indian rivers will turn seasonal, and ultimately dry up by the year 2035 itself, if global warming remains unchecked.

But there's more. Another news article [2] confirms our worst fears: inundation of low-lying areas along the coastline owing to rising sea levels, a drastic increase in heat-related deaths, dropping water tables and decreased crop productivity are some of the horrors outlined for us. Falling crop productivity due to the change in the length of the seasons is of particular concern, because there is an acute shortage of arable land in our country. With the population still growing rapidly and crop productivity dropping, combined with the fact that we are already facing a grain shortage this year and have been forced to procure from abroad, the situation appears dire.

Is it fair? The major contributors to the greenhouse effect thus far are the developed nations, and even on an absolute basis (let us not even go into a per-capita basis), India's contribution to global warming is very little. And yet, we will be among the first to suffer its effects, as the change in climate will decrease crop productivity near the equator but actually increase it in the temperate regions. Effectively, the third world has been offered a very raw deal: suffer for something you didn't do, and still bear the yoke of cutting emissions because, frankly, at this point our planet needs all the help it can get.

How high is safe?
Let us leave India's concerns aside for now, take a step back and look at the global picture. Global temperatures have risen about 0.6 C on average over the past century. There is a consensus in scientific circles that the adverse effects of global warming will probably be manageable for a rise in temperature of up to 2 C, but beyond that, melting ice caps, unbalanced ecosystems, drastically reduced crop yields, etc. will cause worldwide disaster of monstrous proportions. If I haven't painted the picture clearly enough for you, read this article [3] and this article [4] detailing exactly what countries like Canada and Australia can expect in terms of "disaster".

But is this where you heave a sigh and think that, if it takes a century for the temperature to rise 0.6 C, then we have plenty of time to remedy the situation before the rise reaches 2 C? Wrong. You see, there is a lag between the rise in greenhouse gases and the rise in global temperatures. Scientists give the analogy of heating a metal plate directly, and then indirectly by placing a metal block between the plate and the heat source: with the block in place, it takes some time before an increase in temperature at the heat source affects the plate, and if the heat source stabilizes or drops in temperature, the plate continues to heat up for a while before stabilizing or cooling. Thus, the temperature increase we see now is the direct effect of greenhouse gas levels rising sometime in the 20th century; we are yet to reap the effect of the carbon dioxide we are currently dumping into the atmosphere! And the fact is, the rate at which greenhouse gases are going into the atmosphere has been steadily accelerating over the past century.

So, at what level should we hold greenhouse gas concentrations in order to hold the global temperature rise to 2 C? The answer cannot be given in one sentence, because some statistics are involved: we cannot yet accurately predict the temperature rise from carbon dioxide levels, so we have to talk in terms of probabilities. A recent study by Meinshausen et al. [5] gives some startling numbers, explained in much simpler terms in this press article [6]. The gist of it is that we are already past the safe limit! The current level of greenhouse gases in the atmosphere stands at 459 ppm of carbon dioxide equivalent (the actual concentration of CO2, corrected to include the effect of other greenhouse gases). According to the Meinshausen study, if atmospheric greenhouse gas concentrations are maintained at 450 ppm, the probability of the global temperature rise crossing 2 C already reaches unacceptable levels (> 50%). The current EU target is 550 ppm - at that level, we would be looking at a rise of around 3 C! In other words, emissions across the world should already be decreasing, not increasing at an accelerating pace. Countries around the world should be spending a significant percentage of their GDPs to save the planet, but everyone seems reluctant to move.

Panels and Reports
I mentioned the IPCC earlier. The IPCC was formed by the UN and has been around since 1988. Over the years, it has established itself as the world's leading authority on climate change. It publishes its findings periodically; the assessment reports published this year are the fourth set, and the most controversial one, because they read more like a disaster movie script than a scientific report. In fact, there had been protests over the previous report, claiming that the IPCC was being alarmist, and the UK government ordered an independent study (a committee was appointed, led by Nicholas Stern), whose findings were released at the end of October 2006. The Stern Review actually reported that the IPCC had understated the situation in the third assessment report. You see, climate science is far from exact, and the IPCC tends to err on the conservative side. There are already publications saying that the IPCC has been conservative even in the fourth report - read this news article [7].

Perhaps the most important thing the fourth assessment report has accomplished is that it has finally laid to rest claims that global warming is a myth. Yes, until a few years ago there wasn't even a global consensus on whether global warming is the fault of man, because the waters were muddied by studies showing that greenhouse gases, while absorbing heat radiated by the earth, also reflect some incoming sunlight, thus reducing temperatures. Further, it is believed that, geologically, the world is headed towards an ice age, and increasing global temperatures were attributed to periodic properties of the Sun! Now, at last, all these speculations have been laid to rest, and the IPCC has stated that there is a 90% probability that the increase in global temperatures is anthropogenic (caused by man), and primarily due to greenhouse gases - what we've suspected all along. India, too, has finally woken up to the threat, and has set up a panel [Citation needed] to investigate the specific effects of global warming on India over the next few decades, and what remedial measures are feasible. The panel is to be headed by Mr. Pachauri himself, the current head of the IPCC.

To be continued...
In the next part: The Kyoto Protocol, Emissions Trading, Extreme weather events, Bush-bashing, cows, bees and more!

References

[1] The Great Himalayan Meltdown
[2] Climate Change Will Devastate India
[3] Dire consequences if global warming exceeds 2 degrees says IUCN release
[4] Two degrees of separation from disaster
[5] M. Meinshausen "What Does a 2 C Target Mean for Greenhouse Gas Concentrations? A Brief Analysis Based on Multi-Gas Emission Pathways and Several Climate Sensitivity Uncertainty Estimates." in H. Schellnhuber, et al., eds. Avoiding Dangerous Climate Change (Cambridge University Press, New York, 2006)
[6] The rich world's policy on greenhouse gas now seems clear: millions will die
[7] Some scientists protest draft of warming report

Wednesday, May 09, 2007

Attention Concerned Scientists in IN, KY and OH

If you are a scientist in Kentucky, Indiana or Ohio and are concerned about the scientifically inaccurate materials at Ken Ham's creationist museum, please sign this.

Statement of Concern
We, the undersigned scientists at universities and colleges in Kentucky, Ohio, and Indiana, are concerned about scientifically inaccurate materials at the Answers in Genesis museum. Students who accept this material as scientifically valid are unlikely to succeed in science courses at the college level. These students will need remedial instruction in the nature of science, as well as in the specific areas of science misrepresented by Answers in Genesis.

Via Pharyngula