Immunology: Innate Immunity

Why, hello there, everyone! It’s been a while! (I should make a separate category just for “I’m back” posts, since I use this blog so sporadically.) It seems like just yesterday that I was on here crying about my upcoming test in biochemistry, which I’d procrastinated studying for. Ha ha. Ha.

I have another test coming up in biochem. That much hasn’t changed. However, I’m not panicking about that anymore. (You can only take so many biochemistry classes before it just… becomes a part of you, bad joke intended.) Instead, I’m here to wax philosophical about my newest pet subject. No, it’s not transition metal chemistry. No, it’s not plant biology. It’s immunology.

Am I taking immunology? Yes. Is it required for my chemistry coursework? No. Am I insane? Absolutely.

(To be fair, I’m taking it because I’m going to be working on a biochemistry project that overlaps with immunology… and because I have an autoimmune disease that makes me very interested in the subject. But anyway.)

Our test is coming up in a week, so you can expect to see me periodically throwing things at this blog. You know the drill. I apologize in advance. Block me if I get annoying. Comment if I don’t.

For anyone still with me, get ready for a wild ride, because we’re going to take a high-speed tour of the immune system. First stop, innate immunity!

If I learned one thing from plant biology, it’s that a large part of existing as a living organism is learning how to fight off things that are trying to kill you.

It’s just a fact of existence. All organisms face threats from their environment in some way. Whether you’re a plant like a potato fighting off Phytophthora infestans, a bacterium like Streptococcus pyogenes fighting off a virus, or a mammal like a human fighting off pneumonia, to continue to exist as an organism, you have to be prepared to FITE. There is no pacifism in the game of life—either kill, or be killed.

Because of this, organisms of all shapes and sizes come equipped with machinery meant explicitly to defend the fort from attack. Plants have their competing pathways for biotrophic and necrotrophic pathogens that I covered when I talked about plant hormones. S. pyogenes has CRISPR, which, aside from aiding a dangerous pathogen in staying alive, is also absurdly useful as a biotechnological tool. (I know I’ve linked it, but I’ll link it again, because it conveniently came on while I was writing this: go listen to “CRISPR-Cas9,” a cover of “Mr. Sandman” by Tim Blais.) And if you happen to be a human organism trying to stay alive in this dangerous world?

You have an army.

No, really, the numbers here are insane. I’ll circle back to the math in a later post. For now, just trust me—your immune system, which is your means of fighting off infection and other dangerous stuff, is ridiculously complicated, and for good reason. (My professor described it as essentially being the last stronghold for human evolution, and it really is.) Because it’s so complicated, we have to talk about it in segments… which is kind of the point of this post.

You can essentially divide your immune system into two parts: adaptive (or acquired) immunity and innate immunity. Adaptive immunity is what you think of when you think of your immune system, probably—the part that encounters a threat, comes after it in cold blood, and then holds a grudge forever (in the form of an immune memory that prevents you from getting sick from something you’ve had before). However, the adaptive immune system, being remarkably specific, is also… very slow. To keep you from getting deathly ill from every virus and bacterium that slips through the cuts in your skin, you need a less specific, but faster response—your innate immune response.

The most basic components of innate immunity are the physical and chemical barriers that prevent things from getting in your body. There are a lot of things that I could talk about here, but really, it’s just things like… stomach acid. Mucus. Your freakening skin. Before a pathogen can even touch the cells of what you might consider your “immune system proper,” it has to make it over the wall and through the minefields that constitute your first line of defense against invaders.

But, okay, let’s say something gets in. Lots of things get in, or we wouldn’t need an immune system at all. What then? If an invader makes it over the wall and through the minefields, what soldiers do we have to call to our aid?

Well, you’ve probably heard of white blood cells, the cells in your blood that are responsible for carrying out more sophisticated tasks than just toting oxygen from point A to point B. (Me to me: Get out with your hemoglobin slander.) They come in many shapes and sizes. Some—B cells and T cells—are part of your adaptive immune response. Others, such as polymorphonuclear (PMN) leukocytes and macrophages, are part of the innate immune response.

Polymorphonuclear leukocytes, also called “granulocytes,” are named because of the way they look—these cells have lobed nuclei and a granular appearance. They contain a heck of a lot of dangerous stuff (in the form of degradative enzymes and reactive oxygen species) that can be used to kill a pathogen, and because their job is basically to seek out pathogens and dump bad stuff on them, they’re relatively short-lived. These cells can be further divided into more specialized subtypes, such as the abundant neutrophils, the parasite-attacking eosinophils, and the histamine-releasing basophils.

Macrophages, which are derived from mononuclear monocytes, are a larger breed of immune cell that patrol the body looking for potential threats. They’re involved in a wide variety of immune functions, including the removal of dying cells and debris, and they’re also able to present antigens to T cells, which makes them important in adaptive immunity. However, they are perhaps most famous for their ability to perform phagocytosis—their ability to “eat” things such as pathogens.

How does a cell go about “eating” anything, and what does that even accomplish? Well, Wikipedia has some nice figures that illustrate how this mess works, but the basic idea is that the macrophage engulfs a large particle—such as a bacterium—so that the particle ends up in a “bubble,” called a “phagosome,” inside the cell. The phagosome then fuses with lysosomes, also inside the cell, to shred the bacterium into little bits.

What is a lysosome? Well, it’s a membrane-bound organelle found inside eukaryotic cells that’s responsible, in blunt terms, for smashing stuff up. My professor refers to them as “the blender and the blowtorch” because they contain two different kinds of “bad stuff”—degradative enzymes that cut things apart (“the blender”), and reactive oxygen species (ROS) that basically do the microscale equivalent of setting things on fire (“the blowtorch”). Unless you’re a particularly hardy little pathogen, once you come in contact with these things, it’s game over.

So, then, it would seem that the key to a strong innate immune response is the ability to recognize pathogens as they get in, so that they can be subsequently eaten and blended/blowtorched into smithereens. But, as mentioned before, the innate immune system doesn’t rely on specific targeting of pathogens using antibodies. How, then, can it tell when something is suspicious, if it doesn’t have a template to go off of?

It turns out that the innate immune response relies on pattern recognition, or the recognition of certain markers, called pathogen-associated molecular patterns (PAMPs), that are evolutionarily conserved in a large number of pathogens that we encounter. These PAMPs are recognized by pattern recognition receptors, or PRRs, that can trigger both an innate immune response and initiate an adaptive immune response.

PRRs come in many different flavors that are tailored for specific markers present in invading pathogens (and, as I’ll explain in a minute, parts of our own cells). Toll-like receptors, or TLRs, are one such class of these receptors. Of particular interest are TLR4, TLR5, and TLR9, which each bind to a specific PAMP.

TLR4 binds to lipopolysaccharide, or LPS, which is a component of the outer membrane in Gram-negative bacteria. That makes sense, if you think about it—although your innate immune system has no way of identifying a pathogen specifically, if it sees LPS, it knows that there’s some form of Gram-negative bacteria hanging around.

TLR5 is brilliant in that it employs a similar strategy to recognize motile bacteria by locking onto flagellin, which is a component of bacterial flagella (the “tails” that some bacteria use to move). Again, this makes sense—if your immune system sees flagellin, it really doesn’t care what bacterium it is specifically. It just knows it’s a bacterium, and it’s got to go.

TLR9 is particularly interesting in that it’s activated by what you might consider to be a less straightforward PAMP—DNA. More specifically, it’s activated by unmethylated CpG DNA. If you have some background in biology, this might not surprise you, but it turns out that vertebrate genomes typically contain fewer CpG dinucleotides (C followed by G) than invertebrate genomes, and where they occur in vertebrates, they are often methylated. (Methylation has a lot of interesting consequences, such as increasing the rate of mutation and altering gene expression—anyone in this thread like epigenetics?) Therefore, when TLR9 sees unmethylated CpG DNA, it’s pretty safe in assuming that it’s from an invader.
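Because I can never resist, here’s a little Python sketch of the statistical heuristic TLR9 is exploiting: count observed CpG dinucleotides in a sequence and compare that to what base composition alone would predict. The two example sequences below are made up for illustration, not real genomic data.

```python
def cpg_observed_expected(seq: str) -> float:
    """Observed/expected CpG ratio: observed CG dinucleotides versus
    (#C * #G / length), the count expected from base composition alone."""
    seq = seq.upper()
    observed = seq.count("CG")
    expected = seq.count("C") * seq.count("G") / len(seq)
    return observed / expected if expected else 0.0

# A CpG-rich stretch (bacteria-like) scores near or above 1;
# a CpG-depleted stretch (vertebrate-like) scores well below 1.
bacterial_like = "ACGTCGACGGCGTACGCGAT"
vertebrate_like = "ACATTGACTGGATACTGGAT"
print(cpg_observed_expected(bacterial_like))   # well above 1
print(cpg_observed_expected(vertebrate_like))  # 0.0 (no CpGs at all)
```

Real classifiers also have to account for methylation state, which a string of letters can’t capture, but the depletion signal alone is already informative.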

Another interesting PRR—not a TLR—is formyl peptide receptor 1 (FPR1), also called f-Met-Leu-Phe receptor 1. This receptor does about what it sounds like it should—it recognizes the PAMP formyl-methionine (f-Met). f-Met is a modified (formylated) amino acid that’s used to initiate protein translation, but only in bacteria. Therefore, if it shows up anywhere in your body, it’s an indication that bacteria are on the loose.

… Yeah, okay, this is a blatant lie. The use of f-Met at the initiation of protein translation is very common in bacteria and mitochondria, the powerhouses of your own freakening cells. (Also chloroplasts, which we don’t have.) If you’ve ever heard of the endosymbiotic theory, this won’t surprise you at all, but for everyone else, here’s a fun little tidbit to make you feel a bit like an alien: your mitochondria, which, again, are your cell’s oxygen-consuming power plants and the reason that you can exist as you do, are, in a sense, a particularly clingy form of bacteria that just… decided to take up residence inside a eukaryotic cell long ago. Although they’re an integral component of your human cells, they have a lot of the hallmarks of bacteria, including their own genomes, the ability to reproduce through binary fission, and the use of f-Met in protein translation.

Ha. Ha ha ha.

A fun consequence of this is that, if you break open a cell and expose mitochondrial proteins to the world at large, FPR1 will recognize those proteins as a sign of a threat, and an immune response will get initiated. According to my professor, that’s the fun (hah) reason why the inflammatory responses to infection and injury look so similar. It’s because they’re the same.

And, yeah, speaking of, what even is inflammation? Everyone is familiar with it in some capacity, whether it’s the fleeting inflammation associated with a cut (like the annoying throbbing of my thumb, which got bitten by a mouse about fifteen minutes ago in immunology lab) or the prolonged effects of chronic inflammation (like rheumatoid arthritis). But what causes the typical redness, heat, and pain that are associated with inflammation? You, being tired of me explaining everything in painstaking detail, sigh, “Increased blood flow, obviously.”

Yeah, essentially. Increased blood flow, vasodilation, edema, and elevated cellular metabolism are all factors that contribute to the characteristics associated with inflammation. Although my book goes into a lot of detail, I’m… not going to do that. I’m just going to cut straight to the stuff that I think is interesting.

When you suffer injury of some kind, be it mechanical, chemical, or biological, polymorphonuclear leukocytes (PMNLs) are the first cells on the scene. They show up within thirty minutes, phagocytize things (in the case of neutrophils), and release all of those awful lysosomal enzymes to try to kill anything dangerous. If that first strike doesn’t work, macrophages and lymphocytes show up within 4-6 hours to try to aid in the fight. As you’ll recall, macrophages are going to eat and kill pathogens, too, but they’re also going to present antigens to T cells, which can then go on to stimulate an adaptive immune response, which will take another 5-7 days to kick in. Therefore, your “first-responder” PMNLs show up within minutes, the macrophages show up within hours, and your adaptive immune response—specific and sophisticated as it is—shows up within a week.

I hope this post has made it clear to you how important your innate immunity is in keeping you alive. It’s easy, as humans existing in societies that have soap, antibiotics, and vaccines, to focus only on that part of your immune system with smarts and a good memory. However, before our adaptive immune system can even kick into high gear, we already have an army of immune cells at our disposal, fighting and dying to keep pathogens from overtaking our bodies using blenders, blowtorches, and an evolutionarily bestowed sense of intuition. If that isn’t freaking amazing, I don’t know what is.

All right. I hope I’ve piqued your interest in immunology, because you’re going to be seeing a lot more of it. Now that we’ve talked about your soldiers on the front lines, it’s time to kick it up a notch and look at some more complicated aspects of the immune system. First stop? Adaptive immunity!

Questions? Comments? Blease put them below. Even I get tired of talking to myself. ❤


Protein Purification: An Outline

Hello, everyone! After an evening playing Hatoful Boyfriend and watching Hamilton animatics, I’m back to bring you another post on techniques in biochemistry. My last post mostly focused on general techniques for use with DNA, so today we’re taking the next logical step—we’re talking about protein purification!

When I took a month-long biochemistry lab over the summer, wherein we spent all day, every day doing basic biochemistry experiments and complaining about our lot in life, our professor would walk in every morning, smiling knowingly, and say, “Do you have protein yet?”

Our moment of vindication came at the end of our lab. At the end of four exhausting weeks of failed PCR, dying cells, and being way too close to E. coli for comfort, we were able to hold up a gel stained with blue bands and say, “Yes. Yes, we have protein.”

Two years later, I am back in a biochemistry classroom, and as my professor lectures on protein purification methods, I think back to that fateful day, that brief feeling of success before physical chemistry came for my self-esteem. So, now, I ask the question that no one cares about—how do you get protein? And how do you know you’ve gotten it? Well, reader, to know that, we must understand protein purification methods. These methods can be sorted into four basic categories: solubilization, stabilization, purification, and detection.


The first step in any protein purification protocol is to solubilize your proteins, or to get your proteins out of the cells they’re in—whether that’s E. coli or human thyroid tissue—and into solution. Your approach here, of course, depends on where in the cell your protein localizes. Is it associated with a particular organelle? With a membrane? Is it in the nucleus? These are things you need to know, going in.

One way to get protein out of cells is through osmotic lysis of cells in a hypotonic solution. That’s a fancy way of saying that you put a cell in a solution where the concentration of solutes outside is lower than it is inside—water will enter the cells, and they will essentially explode.

Of course, this method has its limitations. It’s good for getting at cytosolic proteins, since those will readily spill out into solution once the cell busts open. It’s good if you’re dealing with eukaryotic cells that lack rigid cell walls. However, if these conditions aren’t met, osmotic lysis isn’t likely to help you; you’d be better off with something a little more… violent.

Mechanical lysis is a fairly useful technique, and one that works well if you’re trying to isolate a protein from cells with rigid cell walls (such as plant cells) or from tissues. (For example, in a paper I read on the isolation of thyroid peroxidase from human thyroid tissue, they sonicated the thyroid tissue and then filtered it several times to get at their protein.) This is a group of methods including sonication, homogenization, and using a French press. What all these methods have in common is that they mechanically break apart cells, which is useful if your cells are… well, not susceptible to breaking by other methods. Although it is useful, it can also be destructive—your best bet is to use the least violent method possible.

If you’re wanting to isolate specific cell organelles, that can be done using centrifugation, especially if you manage to set up a density gradient so that components of specific densities migrate to certain places. However, this isn’t very useful if your protein of interest is going to end up in the pellet instead of the supernatant, so that is something you have to take into account when you do your experiment.


Getting your proteins into solution, of course, is only half the battle. Once they’re there, naked and vulnerable to their surroundings, you have to figure out how to protect them from damage and destruction. You see, cells are veritable powerhouses of enzyme production, and among the enzymes typically inside the cell are—you guessed it—proteases. Enzymes whose entire job is to… kill proteins. Until they’re dead.

If your whole purpose in conducting an experiment is to get protein, this is not good. Fortunately, you can add protease inhibitors of different types to stall their activity (unless your protein of interest is protease resistant, in which case, let ’em have at it, I guess). Metal chelation agents such as EDTA are a good choice, since many proteases are metalloenzymes that depend on bound metal ions, but there are also specific inhibitor cocktails for specific proteases.

This isn’t, of course, the only challenge your proteins face—you also have to worry about denaturation and microbial growth. To avoid denaturation (as well as to thwart any remaining proteases), keeping your proteins cold—but not freezing!—is a good solution. Adding antimicrobial agents such as sodium azide can help prevent microbial growth.


So your protein is in solution, and you’ve got it fairly safe from harm. Now you need to actually separate it from… everything else that’s still floating around in there. There are two general ways that you can do this: you can either do non-chromatography things, or chromatography things.


A few protein purification methods don’t rely on chromatography, and are therefore theoretically simple to carry out. There are lots of ways that you can vary these, but the three basic types are salting-out, pH variation, and dialysis.

Salting-out basically works off of the principle that proteins tend to precipitate out of solution at certain ionic strengths, and particular proteins can precipitate out at different ionic strengths from those around them. Ionic strength is a function of both the charge of an ion in solution and its concentration, and thus, varying ionic strength involves varying salt concentration. Because most proteins tend to precipitate with more salt, this process is called salting-out.
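Since ionic strength is doing the work here, it’s worth seeing the actual formula: I = ½ Σ cᵢzᵢ², summed over every ion in solution, where cᵢ is molar concentration and zᵢ is charge. Here’s a minimal Python sketch; ammonium sulfate is the salt I’ve used as the example, since it’s a common choice for salting-out.

```python
def ionic_strength(ions):
    """Ionic strength I = 1/2 * sum(c_i * z_i^2).
    ions: list of (molar_concentration, charge) pairs."""
    return 0.5 * sum(c * z ** 2 for c, z in ions)

# 1 M ammonium sulfate, (NH4)2SO4 -> 2 M NH4+ (z = +1) and 1 M SO4^2- (z = -2):
print(ionic_strength([(2.0, +1), (1.0, -2)]))  # -> 3.0

# Compare 1 M NaCl -> 1 M Na+ and 1 M Cl-:
print(ionic_strength([(1.0, +1), (1.0, -1)]))  # -> 1.0
```

Note how the squared charge makes multivalent ions punch above their weight: the same molar amount of sulfate contributes four times what a chloride ion does.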

Isolating proteins based on pH takes advantage of the fact that different proteins have different isoelectric points, or points where their total charge is zero. If the pH of your solution is equal to the pI of your protein, the protein’s solubility in that solution is minimal, and therefore it tends to aggregate.
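The pI idea can be sketched numerically: treat the protein as a bag of ionizable groups, apply Henderson-Hasselbalch to each one, and bisect for the pH where the net charge crosses zero. The pKa values and group counts below are made-up placeholders, not a real protein.

```python
def net_charge(ph, basic_groups, acidic_groups):
    """basic_groups/acidic_groups: lists of (count, pKa).
    Basic groups carry + charge below their pKa; acidic groups
    carry - charge above theirs (Henderson-Hasselbalch fractions)."""
    pos = sum(n / (1 + 10 ** (ph - pka)) for n, pka in basic_groups)
    neg = sum(n / (1 + 10 ** (pka - ph)) for n, pka in acidic_groups)
    return pos - neg

def isoelectric_point(basic_groups, acidic_groups):
    """Net charge falls monotonically with pH, so bisect for the zero."""
    lo, hi = 0.0, 14.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if net_charge(mid, basic_groups, acidic_groups) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical protein: N-terminus + 2 Lys-like groups vs.
# C-terminus + 3 Asp/Glu-like groups.
basic = [(1, 9.0), (2, 10.5)]
acidic = [(1, 3.1), (3, 4.1)]
print(round(isoelectric_point(basic, acidic), 2))
```

With more acidic groups than basic ones, the zero crossing lands at a fairly low pH, which is exactly why acidic proteins precipitate out of mildly acidic buffers.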

Finally, dialysis is an “old-school” method that involves taking advantage of semipermeable membranes and simple diffusion, the likes of which you learned about in high school biology. If you put your protein in a bag with pores too small for it to move through, then put the bag in a solution with low solute concentration, things in your protein bag will move through the small pores out into the bigger solution, but your protein will stay put. This has limited usefulness, though, especially because it tends to dilute your protein sample.


Although the methods mentioned above are viable, almost every protein purification protocol I have read has involved chromatography. These methods are typically more sophisticated, but have the payoff that they are reasonably successful at isolating protein at a high purity, especially when used sequentially. There are three basic chromatographic methods that I will mention, which of course can be subdivided into oblivion: size-exclusion chromatography, ion-exchange chromatography, and affinity chromatography.

Size-exclusion chromatography separates proteins—you guessed it!—based on size. In this kind of chromatography, you have a column that contains hollow beads with pore sizes too big for certain proteins to get through. When you put your protein solution through it, larger molecules don’t get caught up inside the beads, so they tend to move through quickly. Smaller molecules, which have the beads to contend with, take longer to elute. In theory, what this means is that larger proteins are separated from smaller ones, which you can confirm by taking your aliquots and running them through a spectrophotometer to test their absorbance.

Ion-exchange chromatography, shockingly enough, separates proteins based on charge. Beads in these columns carry charges that attract proteins of opposite charge, and therefore those proteins don’t elute through the column with the same-charged and neutrally-charged species. If your protein is an oppositely-charged species (which it probably is, if you’re doing this), you let everything else run through, add salt to knock your protein off the beads, and collect your purified sample.

Affinity chromatography encompasses all of the chromatography methods that are specific, interesting, and probably expensive. This involves taking advantage of specific properties of your protein of interest in order to separate it from a mixture. For example, making up beads with iron is a good way to isolate hemoglobin, the stuff in your blood that likes, you know, iron. Immunoaffinity chromatography also falls under here, and involves making up beads of an antibody specially tailored for your protein. This is expensive, but useful, especially if you’re looking to separate similar isoforms of an enzyme. (The paper I read used this to catch their thyroid peroxidase, which is, incidentally, a target of the immune attack in the autoimmune disease Hashimoto’s—a disease I am all too familiar with.)

All right, so we’ve gotten our protein into solution, stabilized it, and purified it. Can we go to our biochemistry professor yet with smug grins and an overwhelming sense of relief? Of course not! Why? Because what is he going to say when you tell him you have protein?

That’s right. “Prove it.”


As far as methods go, protein detection methods are fairly straightforward. Although they can be modified and tweaked, again, into oblivion, they tend to follow a basic format. The detection methods that we discussed, in general, were ELISA, Western blotting, electrophoresis, and paper chromatography.

ELISA, or Enzyme-Linked Immunosorbent Assay, is a method used to detect antigens and, more generally, protein after purification. The antigen of interest is bound to a plate, and the plate is then washed with antibodies that bind the antigen and produce a signal. Do you see signal? Then you see protein. That’s pretty much the gist of it.

Western blotting works similarly to Southern blotting, except it’s with, you know… proteins. You tend to see fewer artifacts with this than with ELISA, although it isn’t a quantitative method. (Not that ELISA is, despite many scientists’ efforts to make it so.)

Electrophoresis with proteins works the same way it does with DNA—coat the proteins with something so they all have the same charge, and run electrical current through a gel so they migrate based on their size. This general setup is called SDS-PAGE, or sodium dodecyl sulfate-polyacrylamide gel electrophoresis. (SDS is the detergent used, and PA is the stuff that makes up the gel.) If you do it on paper instead of in a gel, it becomes paper electrophoresis.
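How do you get a molecular weight out of a gel? log10(MW) is roughly linear in how far a band migrated, so you run a ladder of known standards alongside your sample, fit a line, and read your unknown off the curve. Here’s a sketch in Python; the ladder values below are made up for illustration.

```python
import math

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# (relative migration Rf, MW in kDa) for a hypothetical protein ladder:
standards = [(0.2, 100.0), (0.4, 50.0), (0.6, 25.0), (0.8, 12.5)]
slope, intercept = fit_line([rf for rf, _ in standards],
                            [math.log10(mw) for _, mw in standards])

def estimate_mw(rf):
    """Read an unknown band's MW (kDa) off the log-linear standard curve."""
    return 10 ** (slope * rf + intercept)

print(round(estimate_mw(0.5)))  # a band halfway down the gel -> 35
```

In other words, the smug grin at the end of the lab comes not just from seeing a band, but from seeing a band at the right height.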

Paper chromatography is yet another qualitative method that tends to work best with smaller molecules. It’s cheap, so cheap that a close cousin of it—thin-layer chromatography, or TLC—is a staple of every Gen Chem lab. Basically, a nonpolar solvent (such as hexane) is run over a plate made of a polar substance, and spots of sample spotted on the plate move based on whether they prefer the stationary phase (the plate) or the mobile phase (the solvent). In other words, they separate based on polarity.

Phew! Well, that was an awful big pain, but now we’re at the end, and we’re a little bit wiser, aren’t we? Now that we know how to solubilize, stabilize, purify, and detect our protein, we can not only get protein, but prove that we’ve gotten it. Now go, budding biochemists. Go, armed with your dried-out SDS-PAGE gels and your will to prove yourselves worth something. Go, and tell your professors that you’re not entirely incompetent.

Questions? Comments? Yeah, me too. But I’ve gotta go to class. You know how that goes.

DNA: Biochemical Techniques

Well, hello there, everyone! It’s been an awful long time, hasn’t it? Last time I was here, I was writing about Plant Bio as an undergrad, and now, here I am, killing plants as a graduate student. (You think I’m kidding. I’m not. My plants caught a disease and died. The whole time they were dying I was like, “Aah, salicylic acid is at work here,” and yet they still died. Theoretical science, folks.)

I’m back in biochemistry again (in other words, I’m back on my nonsense), and I have a test coming up, so I have returned to my humble blogger origins to bring you an installment of biochemistry two years after my last set of biochem posts. This one is a bit more practical than the last few were, however: we’re going to talk about biochemical techniques to use with DNA!

I don’t know if y’all know this, but DNA is cool.

DNA is a helical molecule made of sugar, phosphate, and nitrogenous bases, with sequences of four distinct bases spelling out messages—a quaternary (base-four) code. You’ve probably heard of the bases: adenine, thymine, guanine, cytosine. In the double-stranded helix, a guanine on one side always pairs with a cytosine on the other, and an adenine always pairs with a thymine. This means, if you unzip the helix and write down what you see on one strand (“5′-AACGT-3′,” for example), you can reliably predict what the corresponding sequence would be on the other strand (“3′-TTGCA-5′”).
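The pairing rules are simple enough to write as code. One subtlety: if you want to read the predicted partner strand in the conventional 5′-to-3′ direction, you have to reverse it as well as complement it, which is why this is called the reverse complement. A minimal Python sketch:

```python
# Watson-Crick pairing: A<->T, G<->C.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(strand: str) -> str:
    """Predict the partner strand, read 5' -> 3'."""
    return "".join(COMPLEMENT[base] for base in reversed(strand.upper()))

# 5'-AACGT-3' pairs with 3'-TTGCA-5', which reads 5'-ACGTT-3':
print(reverse_complement("AACGT"))  # -> ACGTT
```

Applying it twice gets you back where you started, which is a nice sanity check that the pairing really is a one-to-one code.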

Your cells take advantage of this by using this four-letter code to write down the instructions on how to make proteins. First, the cell unzips your DNA, looks at the “template strand,” and writes down the complementary sequence as RNA (which can be thought of as single-stranded DNA with uracil in place of thymine). A piece of cellular machinery called the ribosome then uses this transcript to make proteins by reading off the sequence of bases, three at a time (a codon at a time). Each three-letter sequence is associated with one of twenty amino acids, the basic building blocks of proteins; this allows ribosomes to string together sequences of amino acids by reading the sequence of three-letter codes for each building block in the RNA transcript. (These processes of copying the DNA’s message and using it to make proteins are called “transcription” and “translation,” and you should click the links if you want to know more about them.)
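A toy version of what the ribosome does fits in a few lines: walk the transcript three bases at a time and look each codon up in a table. The codon assignments below are real, but I’ve only included a handful of the 64 entries, so this is a sketch, not a working translator for arbitrary sequences.

```python
# A small slice of the standard genetic code (the full table has 64 codons).
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly", "AAA": "Lys",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(rna: str) -> list:
    """Read codons in frame from position 0; stop at a stop codon."""
    protein = []
    for i in range(0, len(rna) - 2, 3):
        amino_acid = CODON_TABLE[rna[i:i + 3]]
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate("AUGUUUGGCAAAUAA"))  # -> ['Met', 'Phe', 'Gly', 'Lys']
```

Notice that the reading frame matters enormously: shift the start by one base and every codon, and therefore every amino acid, changes.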

DNA (and RNA) is incredible because it directs the synthesis of every protein in our bodies, but it is also amazing because it offers us an excellent opportunity to modify the traits of particular organisms. If you want to change the way that a fly looks or a worm acts, want to make E. coli resistant to an antibiotic or a mosquito die when you tell it to, all you need to do is change its DNA. Cellular machinery and natural organismal growth, given due time, will handle the rest.

But how does one change the DNA of an organism? And after you do it, how do you know you’ve done it? Well, dearest reader, that is where our slew of DNA techniques comes in.

The first tools in our extraordinary toolkit are these amazing little enzymes called restriction enzymes.

Restriction enzymes are a particular class of endonucleases (enzymes that cleave the backbone of DNA) that cut DNA in a particular location. They exist in nature as a defense mechanism—bacteria use them to chop up the DNA of invading viruses, which seem to be constantly assailing all domains of life with their tyranny. There are many types of restriction enzymes, classified as Type I – Type V. Type I and Type III enzymes cleave DNA either far away from their recognition sites or nearby, respectively. Type IV enzymes cleave methylated DNA. Type V enzymes cleave DNA using RNA as a guide–this class contains the extremely fascinating (and recently controversial) Cas9 enzyme used by the CRISPR-Cas9 gene-editing system. (Please click that link. It’s a song. You’ll love it.) However, Type II restriction enzymes cut DNA at very particular sequences, making them extremely useful in DNA editing.

Most of these enzymes recognize DNA sequences that are called “palindromic sequences.” (If you watched the A Capella Science video, you’re already ahead.) These are sequences of DNA that read the same on both strands. Taking into account that DNA strands run in opposite directions (and each strand is read from the 5′ end, or end with a free 5′ phosphate, to the 3′ end, or the end containing a free 3′ OH group), for example, the sequence “5′-GAATTC-3′” on one strand is “5′-GAATTC-3′” on the other strand—they’re the same!
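“Reads the same on both strands” has a precise meaning: the sequence equals its own reverse complement. That makes palindromic sites easy to check in code. (GAATTC, the example in the text, happens to be the recognition site of the enzyme EcoRI.)

```python
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def is_palindromic_site(seq: str) -> bool:
    """True if the sequence reads the same 5'->3' on both strands,
    i.e. it equals its own reverse complement."""
    seq = seq.upper()
    return seq == "".join(COMPLEMENT[b] for b in reversed(seq))

print(is_palindromic_site("GAATTC"))  # EcoRI's site -> True
print(is_palindromic_site("GATTTC"))  # one base off  -> False
```

Note this is different from an everyday text palindrome (“racecar”): GAATTC read backwards is CTTAAG, but complementing that gets you GAATTC again.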

Some restriction enzymes will cut this sequence right down the center, generating two pieces of DNA with flat ends, called “blunt ends.” Blunt ends are not extremely useful in comparison to “sticky ends,” which result when the enzyme cuts the double-stranded DNA in a staggered way. For example, in the scenario described above, a sticky end would result if the enzyme cut between the G and A on both strands. This would result in four exposed bases on the end of each fragment—AATT—that were not paired with anything.

Why are these called “sticky ends?” Because it is easy for these ends to “stick” back together by pairing up those exposed bases on the ends of their fragments. This is very useful if you’re interested in sticking a new piece of DNA in a sequence, if you think about it. Just cut your DNA with the same enzyme you used to cut the DNA strand you want to stick it in, put them in together, and let nature do the rest. Some of your DNA will get stuck in the middle of the original strand when it sticks itself back together. (I realize this is difficult to visualize, so here’s a Wikipedia article to help you out.)
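Here’s a little simulation of the staggered cut described above, tracking just the top strand for simplicity. The surrounding sequence is made up; the cut position (between the G and the first A of GAATTC) is the one from the text.

```python
def staggered_cut(seq: str, site: str = "GAATTC", offset: int = 1):
    """Cut the top strand `offset` bases into the first occurrence of the
    recognition site; the same offset applied to the bottom strand leaves
    a single-stranded overhang of the middle bases."""
    i = seq.index(site)
    left = seq[:i + offset]                      # top strand, up to the cut
    right = seq[i + offset:]                     # top strand, after the cut
    overhang = site[offset:len(site) - offset]   # unpaired "sticky" bases
    return left, right, overhang

left, right, overhang = staggered_cut("TTGAATTCAA")
print(left, right, overhang)  # -> TTG AATTCAA AATT
```

Any two fragments cut by the same enzyme carry the same AATT overhang, which is exactly why they can pair up and be ligated back together in any combination.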

Okay, so we have the tools with which we can stick our DNA into another piece of DNA, but what use is that? Well, you probably want to get it into an organism such as E. coli, either to express it (if, for example, you want to isolate the protein) or to replicate it. This, dear reader, is why we have vectors.

The simplest kind of vector is a plasmid vector, a circular piece of DNA into which you stick your gene of interest. These are useful because bacteria, such as our favorite lab organism, E. coli, already make use of these. In bacterial cells, plasmids are circular pieces of DNA separate from the bacterium’s main chromosome that contain genes useful under specific conditions—for example, antibiotic resistance genes. (Bacteria, in fact, share these with each other in nature through a process called horizontal gene transfer, which would be really convenient in humans.) It isn’t hard to imagine, then, that the easiest way to get a new gene into E. coli is to put it on a plasmid.

Plasmids typically contain a polylinker, a selectable marker, and a reporter gene. The polylinker is a site containing recognition sites for many restriction enzymes. This makes it easy to insert your gene, in theory—cut it with a restriction enzyme, cut your plasmid with the same restriction enzyme, and have it insert at this site. The selectable marker gives you a way of selecting for only cells containing your plasmid (“transformed cells”), and a reporter gene gives you a way of knowing that your DNA got put into the plasmid.

So, for example, say that you want to insert a gene for a particular protein into a plasmid. You cut the plasmid and your gene with the same restriction enzyme, and you hope that your gene got integrated into the plasmid. Then you put your plasmids in E. coli and hope that some of the cells get plasmids, and some of those plasmids have your gene in them. That’s a whole lot of hoping.

However, if your plasmid contains a gene for ampicillin resistance, when you plate your E. coli cells on ampicillin, only those containing your plasmid will grow. (This is the selectable marker.) Taking it a step further, if you put your polylinker (your gene insertion site) in the middle of a gene that, for example, produces a blue substance in the presence of X-gal, you will be able to tell which colonies have your gene by which colonies aren’t blue if you plate them on X-gal. (No blue means no functional gene, which means that your gene must be in there somewhere. This is the reporter gene.)
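
The whole selection scheme reduces to two yes/no questions per colony. Here’s a minimal Python sketch, assuming an ampicillin-resistance marker and a polylinker inside a lacZ-style reporter gene (the function name and colony states are illustrative):

```python
# Blue/white screening logic: ampicillin kills anything without the
# plasmid (selectable marker), and an insert in the polylinker breaks
# the reporter gene, so colonies carrying your gene stay white on X-gal.

def colony_fate(has_plasmid, insert_in_polylinker):
    if not has_plasmid:
        return "dead"    # no resistance gene, killed by the ampicillin plate
    if insert_in_polylinker:
        return "white"   # reporter disrupted -> no blue product -> your gene!
    return "blue"        # reporter intact -> turns X-gal blue, empty plasmid

colonies = [(False, False), (True, False), (True, True)]
print([colony_fate(p, i) for p, i in colonies])  # ['dead', 'blue', 'white']
```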

As elegant as this all is, it doesn’t work in more complicated cases. BACs (bacterial artificial chromosomes) and YACs (yeast artificial chromosomes), in addition to bacteriophage vectors, are used for larger fragments of DNA. These are very interesting, but we didn’t discuss them much, so I will move on.

So we’ve talked about getting your DNA into an organism, but what about detecting your DNA? How do you know that you even have the sequence that you’re looking for? Well, thank Dr. Edwin Southern, because he’s got a blotting technique.

Southern blotting is a technique that separates DNA fragments by size and then uses probes to visualize their presence on a nitrocellulose membrane. First, DNA fragments are cut up using restriction enzymes and separated through gel electrophoresis. (In gel electrophoresis, DNA fragments are placed in a gel and a current is applied. The charged DNA fragments migrate through the gel, but the larger ones move more slowly.) These bands are then transferred to a nitrocellulose membrane, and a probe that will attach to your gene of interest is incubated with the membrane. If your probe is labeled—which it should be—then it is simple to find which band contains your DNA of interest, or, more simply, whether your gene of interest is present at all.
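
The ordering logic of the gel itself is almost trivial—smaller fragments migrate farther—so it can be sketched as one sort (fragment sizes invented for illustration):

```python
# Gel electrophoresis, reduced to its ranking rule: smaller DNA
# fragments slip through the gel matrix faster and end up farther
# from the wells.

fragments_bp = [2300, 500, 1200, 150]   # hypothetical fragment sizes (bp)
band_order = sorted(fragments_bp)       # farthest-traveled band first
print(band_order)                       # [150, 500, 1200, 2300]
```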

An extremely similar process is used to detect RNA. This process, hilariously enough, is called northern blotting, because scientists are cute.

(In fact, there’s a whole compass of these. Southern is for DNA. Northern is for RNA. Western is for proteins. Eastern is for post-translational modifications. Southwestern (?? this is so cute) is for DNA-binding proteins. Ugh. Scientists. Why. This is worse than strange and charm.)

Finally, although it’s a little out of place, it seems deliberately incomplete to get this far without mentioning PCR, perhaps the best DNA-related technique of all time. PCR, or Polymerase Chain Reaction, is a process that allows you to quickly get a large amount of DNA out of a very small amount.

Basically, a fragment of your DNA of interest is placed in a solution with all of the necessities for DNA replication—DNA polymerase, primers, and nucleotides, specifically. The DNA of interest is melted (the strands are separated), the primers anneal to each strand, and DNA polymerase synthesizes new complementary strands. Then the DNA is melted again, and the whole thing starts over. You can see how this would exponentially increase your DNA—this is a technique that is extremely useful for analyzing very small DNA samples.
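
The “exponential increase” is literal—each cycle roughly doubles the template—so the idealized arithmetic is a one-liner:

```python
# Idealized PCR yield: every melt/anneal/extend cycle doubles the
# number of DNA molecules (assumes 100% efficiency, which real
# reactions only approximate).

def pcr_copies(initial_molecules, cycles):
    return initial_molecules * 2 ** cycles

print(pcr_copies(1, 30))   # 1073741824 — one molecule to ~10^9 in 30 cycles
```

That doubling is why PCR can pull an analyzable amount of DNA out of a vanishingly small starting sample.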

(The whole “melting DNA” thing involves heating and cooling the apparatus several times. This used to be a problem, because most DNA polymerases are denatured—“killed”—at the extreme temperatures required to melt DNA, and new polymerase had to be added each cycle. However, this problem has been circumvented by using Taq polymerase, the DNA polymerase of the heat-loving bacterium Thermus aquaticus, which works just fine at high temperatures.)

Phew. Okay. Well, this post was more complicated than it needed to be. Sorry, my brain’s going in circles. I’m not even going to do a proper sign-off, because I’m just going to take a break. Get some Doritos, drink some water, play a few rounds of Hatoful Boyfriend… and then I’ll be back.

Questions? Comments? Yeah, me neither.

A Survey of Phytohormones

Good afternoon, everyone! (It certainly doesn’t feel like afternoon. I woke up two hours ago, and I’ve been novel-writing, as promised, since, so my perception of time is a bit wonky.) I’ve returned from the depths of my own despair to bring you another post on phytohormones, courtesy of my plant biology notes! Buckle up, though—this one’s gonna be a bit of a doozy.

(If this gets a bit incoherent at times, I apologize. I’m listening to The Bus is Late in the background, and I might get strangled. Sapphire hates it. Well, you know what, Sapphire? COME IN HERE AND FITE ME LIKE A—)

Ahem. Anyway. In my last post, I gave you a brief introduction to phytohormones, complete with some passing remarks about their synthesis, transport, and perception. Now, it’s time to get down and dirty and talk about things in more detail—that’s right, we’re getting specific up in here.

You remember that list of hormones that I gave you before? Well, just watch—I’m about to blow through it in less than an hour. (That’s writing time. Hopefully, it’ll take you like… ten minutes. Less, if you’re a freak like Finn.)

Here we go!


Like most stories in biology, the story of auxin begins with Darwin.

In 1880, Darwin (working with his son Francis) published his studies of phototropism in plants. “Phototropism” refers to a plant’s tendency to grow toward light, and it’s something you’ve already seen in action—when plants grow, they grow towards sunlight.

Darwin had the bright idea to cut off the tip of a plant’s coleoptile—a protective sheath in germinating cereal plants—to see what happened. (He also covered only the tips of the coleoptiles, so that they weren’t exposed to light.) He found that, unlike control plants, they exhibited no phototropism—they made no attempt to bend toward a light source. The conclusion, from this data, was that the signal that gave plants the ability to perceive sunlight must travel from the tip of the plant downward.

Forty years later, auxin (indole-3-acetic acid, or IAA) was purified and shown to promote growth. (It’s actually really cool how they did the experiment—they got auxin to soak into agar blocks, and then placed them on decapitated plants. The ones with auxin blocks behaved like they hadn’t had their heads chopped off.) It was also shown that auxin could only move in one direction through the plant—from tip to root. (Remember, we called this “polar transport.”) Using the agar blocks, it was also shown that auxin plays a role in apical dominance; in simple terms, when auxin can travel from the tip of the shoot, branching is suppressed.

In the 21st century, we now have a whole host of molecular tools with which to study auxin. With all these fancy bells and whistles, we’ve managed to pretty much figure this sucker out, from beginning (biosynthesis) to end (signal transduction).


Auxin, or indole-3-acetic acid, is made from indole. (Shocking, I know.) However, most of its biosynthetic pathways use a modified form of indole called tryptophan (an amino acid!). The biosynthetic pathway is very tightly regulated, and is influenced by other hormones and environmental factors (such as temperature or the red light/far-red light ratio).

As we said before, hormones can be conjugated to alter their activity. Auxin is no exception. Conjugation to alanine or leucine (nonpolar amino acids) marks it for storage, while conjugation to aspartate or glutamate (acidic amino acids) marks it for degradation. (The genes that control Asp- and Glu- conjugation are called GH3 genes, and their overexpression leads to dwarf phenotypes.)


As we established, auxin transport is polar. Specifically, auxin moves from shoot tip to root tip through the phloem (which conducts nutrients from the shoot to the root—remember what we said about the phloem being the plant’s “nervous system”?). This is called “basipetal transport” in the shoot, where it moves from tip to base, but in the root, it becomes “acropetal” (movement from base to tip). At the very tip of the root, auxin turns around and travels up a little again (this time, through efflux carriers called PINs).


There are two types of receptors for auxin—ABP1 and TIR1. We don’t really understand how ABP1 works yet, but we do get how the TIR1 pathway works. Let’s look, shall we?

TIR1 is an F-box protein, an exchangeable piece of a larger complex called the SCF ubiquitin ligase complex. (“SCF” refers to “SKP1, CUL1, and F-box,” the three components that make up the complex. Ubiquitination is a way that cells mark proteins for degradation. A ligase is an enzyme that attaches something to something else.) When TIR1 is attached, this complex is called the SCF^TIR1 complex. Auxin works by being a kind of “molecular glue” that attaches TIR1 to target proteins. Once the target proteins are bound, the ubiquitin ligase complex attaches ubiquitin to the protein, and it is broken down by an apparatus called the 26S proteasome.

Now, you may be asking, how the heck does breaking something down accomplish growth? Well, typical targets of the SCF^TIR1 complex are Aux/IAA proteins, which are transcriptional repressors. They live short, crazy lives in the nucleus (their half-life is roughly 6.5 minutes). When they’re broken down, they no longer act as repressors, and the genes they were repressing become activated.
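
Just to appreciate how fast a 6.5-minute half-life burns through a repressor pool, here’s the standard half-life arithmetic (the half-life is the figure quoted above; everything else is generic):

```python
# Exponential decay with the ~6.5-minute Aux/IAA half-life: the
# fraction of repressor remaining after t minutes is 0.5 ** (t / t_half).

def fraction_remaining(minutes, half_life=6.5):
    return 0.5 ** (minutes / half_life)

print(fraction_remaining(6.5))   # 0.5  — one half-life
print(fraction_remaining(13.0))  # 0.25 — two half-lives
```

Within half an hour, well over 90% of the pool is gone—which is how targeted degradation doubles as a fast “on” switch for transcription.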


Now that we know how auxin works, we can ask ourselves, what does auxin signaling accomplish? Well, we already said that Darwin found that it plays a role in phototropism. As it turns out, it also plays a role in gravitropism (response to gravity). The “bending” that changes the direction of a root’s growth (so that it’s always going down) results from changes in auxin concentration—more auxin on one side means more cell elongation on one side in relation to the other, which makes the root bend.

More generally, auxin promotes lateral organ initiation and inhibits branching in the shoot, while maintaining stem cells and promoting branching in the root. It also plays a role in embryo patterning and organ development—messing with auxin in embryos results in plants without roots. How’s that for the holy grail of growth hormones?

Auxin also has lots of friends that influence how it does its job. Auxin and cytokinin don’t get along at all (my professor calls them “arch-nemeses”). For example, if you plate plant cells with auxin, you’ll grow only roots, and if you plate them with cytokinin, you’ll grow only shoots. However, if you plate them with both, you get a mess of undifferentiated cells, because cytokinin and auxin are too busy fighting with each other to get anything done. (You can, however, grow shoots with cytokinin and then roots with auxin. Just not at the same time.) Auxin and ethylene are BFFs—the presence of one promotes production of the other. Auxin can also scream loud enough to drown out salicylic acid, prioritizing growth over immune response.

Alright! That’s plenty complicated, isn’t it? Well, we’re just getting started. Let’s keep going—!


Gibberellins get their name from a species of Japanese rice fungus known as Gibberella fujikuroi. Infection with this fungus resulted in a disease called “bakanae disease.” (馬鹿苗, or “bakanae,” is Japanese for “foolish seedling.”) The fungus excretes gibberellic acid, which causes rice plants to rapidly elongate until they cannot support themselves.

Sometime later, it was discovered that gibberellins aren’t just some weird poison made by fungi. Plants make them for themselves! (In fact, Mendel’s “dwarf peas” had a mutation that prevented them from making gibberellins.) Turns out, they’re plant hormones that promote cell elongation by inhibiting inhibitors. Sound confusing? Don’t worry, it only gets worse from here!


The synthesis pathway for GAs is, in my professor’s words, “outrageous.” It starts in the plastids, where GAs are made from a precursor called GGPP. Synthesis then moves into the endomembrane system, finally to be exported into the cytoplasm. I could go into detail, but, well, I won’t. Just take my word for it?

You remember when we talked about conjugation, right? Well, GA deactivation can take place through a couple of different processes—oxidation, methylation, or epoxidation. Again, that’s about as much detail as we need. Okay?


Gibberellin transport is a lot less complicated than auxin’s, because gibberellins aren’t picky. They move both ways, although not for long distances. (GAs also move from the embryo to the aleurone—a protein-rich layer in seeds—to stimulate amylase production during germination. Amylase breaks down starch into sugar for the baby plant.)


Gibberellin signaling also uses the SCF ubiquitination complex. Gibberellins bind DELLA proteins (a class of repressor proteins) to GID1. The DELLA protein is then targeted by the SCF^SLY1/GID2 complex for ubiquitination and, therefore, degradation.


As you’ve already noticed, gibberellins have a big effect on the plant as a whole. They promote cell elongation, which allows plants to grow very quickly to avoid, for example, being submerged in flooding. They also promote nutrient mobilization in seeds, which is useful not only if you’re a germinating embryo, but if you’re interested in making beer. (To get malt, they trick those poor embryos into breaking down their starch stores using GAs—rude.) They are also, interestingly, necessary for flowering.

Phew. I need to stand up for a minute.


Cytokinins, auxin’s arch-nemeses, were discovered when desperate researchers took desperate measures. In the 1950s, people wanted to find compounds that would increase cell growth in plant cell cultures, so they started putting whatever they could think of in with the cells—including coconut milk. Turns out, there was something present in coconut milk that increased cell growth. What was it? Zeatin, a cytokinin! (The first cytokinin to be characterized, kinetin, came from an even stranger additive: autoclaved herring sperm DNA.)

Cytokinins, as hormones, do a lot of stuff, but perhaps their most famous function is in delaying senescence (read: delaying death). That got a lot of people thinking, “Hey, we’ll overexpress cytokinins, and our plants will never die!” Turns out, that’s not as great an idea as it sounds.


Cytokinins are adenine-related compounds substituted at the N6 position. The two most notable among them are trans-zeatin (tZ) and isopentenyladenine (iP). Starting with ADP or ATP, they’re made in basically all plant tissues (although the location of synthesis in the cell isn’t yet known).

As you’d expect, cytokinins can undergo conjugation to modulate their activity. Reversible conjugation to a carbohydrate marks them for storage, while oxidation marks them for degradation.


Cytokinins are transported through both the xylem and the phloem. However, fascinatingly enough, all cytokinins aren’t present in equal amounts in both tissues. Xylem contains mostly tZ and tZ riboside, while phloem contains iP and tZ. It’s not yet understood exactly why this is, but it’s expected to be, as always, caught up in nuances of signaling.


Unlike the receptors for auxin and gibberellins, the receptors for cytokinins are the membrane-bound components of a two-component system. Without getting into too much detail (that’s what the Wiki link is for), signaling in the cytokinin pathway requires two pieces—a histidine kinase (HK) and a response regulator (RR). Perception at what is called the “input domain” activates the histidine kinase, and a phosphate group is transferred to one of its histidines. This is then transferred to an aspartate on the response regulator’s “receiver domain,” where it causes a conformational change that effects a kind of response.

(If this sounds complicated, it is—my professor, upon learning that few people in the class had taken a microbiology course, facetiously gushed, “Oh, yay, I have the honor of being the first to teach this to you!”)

If you’re asking about the names of the particular components, you’re asking for even more problems. There are three receptor proteins—AHK2, AHK3, and AHK4—as well as five “histidine-containing phosphotransfer factors (HPts),” called AHPs, and twenty-three RRs (ARRs). That’s… a huge freaking mess, so we’re just going to talk about them in general.

The AHKs form transmembrane homodimers, which is a fancy way of saying they’re stuck in the plasma membrane, and they pair up with themselves. When they perceive cytokinins, they transfer a phosphoryl group to one of the five AHPs, which then shuttles into the nucleus to pass it to an ARR. That’s it. That’s the basic gist.
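
The relay reads like a bucket brigade, which makes it easy to caricature in code. A sketch—protein names are borrowed from above, but the mechanics are deliberately cartoonish:

```python
# Cartoon of the cytokinin phosphorelay: an AHK autophosphorylates on a
# histidine, the phosphoryl group hops to an AHP shuttle, and the AHP
# hands it off to an ARR's aspartate in the nucleus.

class Relay:
    def __init__(self, name):
        self.name = name
        self.phosphorylated = False

def phosphorelay(ahk, ahp, arr, cytokinin_present):
    if not cytokinin_present:
        return False                      # no perception, no signal
    ahk.phosphorylated = True             # autophosphorylation (His)
    ahk.phosphorylated, ahp.phosphorylated = False, True   # AHK -> AHP
    ahp.phosphorylated, arr.phosphorylated = False, True   # AHP -> ARR (Asp)
    return arr.phosphorylated             # active ARR alters transcription

print(phosphorelay(Relay("AHK3"), Relay("AHP1"), Relay("ARR2"), True))   # True
print(phosphorelay(Relay("AHK3"), Relay("AHP1"), Relay("ARR2"), False))  # False
```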

If you’re asking, “Plant, why so many of everything?” the answer is essentially, “So I can’t screw it up.” Redundancy is a big thing in biological systems, and it’s really useful—for example, you have to mutate three of the five AHPs before you get a “no-signaling” phenotype. Having multiple copies of everything also lets you toy around with what they do—the many ARRs, for example, can have diverse effects on signaling. (Evolution does this like… a lot. Hemoglobin is a fantastic example.)


As I said before, cytokinins do lots of things. Famously, they play a role in senescence and shoot growth. They also have something to do with nutrient uptake and pathogen responses. As with all hormones, CKs aren’t one-trick ponies… which makes them a pain to study.

As said above, leaf senescence is regulated by cytokinins. In plants expressing cytokinin during times when they’re supposed to be dropping leaves, leaves stay alive longer. However, longevity has its costs—for example, keeping a plant alive for a long time doesn’t do good things to the seed it produces.

Cytokinins also play a role in drought and freezing resistance—overexpression of or pretreatment with CKs improves a plant’s ability to survive drought or freezing. One of the ARRs, ARR2, also interacts with parts of the salicylic acid pathway to improve immune response to pathogens such as P. syringae. You gotta give it props, don’t you?

Okay. We’re basically halfway there. (Woah-oh, dog elected mayor….)

Abscisic Acid

Let’s face it: life is stressful, no matter what kingdom of life you belong to. Hey, it’s cool. Abscisic acid understands. Abscisic acid will protect you. Well, not you, not so much. But plants? Yeah, it’s got plants’ backs.

(Is Tangled too old to reference now? Okay. Well, you get it.)


Abscisic acid (ABA) is synthesized in the plastid and the cytoplasm from zeaxanthin, a forty-carbon carotenoid. The resulting compound can be marked for degradation through conversion to phaseic acid, which is something that happens when you rehydrate a plant that’s got its drought-response engaged. Glycosylation reversibly marks it for storage in the vacuole, which doesn’t surprise anyone at this point. (This is also how it’s transported in general.)


ABA moves in the exact opposite way that auxin does—through polar transport from root to shoot, through the xylem. This makes a lot of sense, if you think about it: if ABA is to get the plant to conserve water, it needs to move from the root (“There’s no water down here!”) to the shoot (“Hey, leaves, close those stomata!”). It can also travel short distances through the apoplast.


ABA has sort of unique receptors called PYR/RCAR receptors. Essentially, ABA binds in a complex to PYR/RCAR receptors and ABI1 (or other PP2C phosphatases). When PYR/RCAR and ABA bind PP2C, PP2C is inactivated, and it can no longer inactivate another protein called SnRK2. SnRK2 phosphorylates other proteins, such as ion channels and transcription factors, to bring about downstream effects. (Does this “inhibiting an inhibitor” paradigm sound familiar at all?)


ABA plays a role in a lot of different physiological processes, including guard cell response, root growth, dehydration response, and seed development. For example, ABA allows SnRK2s to open ion channels that move ions out of guard cells, which gets water to move out of the cells. The ultimate result is that the stomata close, which decreases water loss.

Similarly, ABA pushes plants to make roots instead of shoots when the plant is experiencing drought. It also suppresses root branching (get down there to the water—don’t worry about branches!).

Finally, ABA works with gibberellins (GAs) to control seed germination. While ABA is present in the seed, germination is suppressed, and the seed remains dormant. As ABA tapers off, GA levels rise, and the seed starts to grow.
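
That ABA/GA tug-of-war reduces to a seesaw comparison. Here’s the cartoon version, with made-up hormone levels (not real units):

```python
# The germination seesaw: dormancy while ABA dominates, growth once
# GA overtakes it. A caricature of the balance, not a real model.

def germinates(aba_level, ga_level):
    return ga_level > aba_level

print(germinates(aba_level=5.0, ga_level=1.0))  # False — seed stays dormant
print(germinates(aba_level=0.5, ga_level=4.0))  # True  — time to grow
```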


Ethylene is freaky weird for one reason: it’s a gas. Yup, you read that right. It’s a hormone. That’s also a gas.

Like all good discoveries, it was accidental—people noticed that plants growing near leaky gas lamps dropped their leaves and grew strangely. Afterwards, ethylene was purified from ripening apples, which proved that it’s something plants actually make. Which is, you know, cool if you think you’ve stumbled across a new hormone.

This tiny molecule is famous for its role in ripening, but it also plays a role in senescence and cell expansion. Can you think of any ways in which its messing with these things while being a gas might be inconvenient? Because, whether you can or not, you’ve probably experienced it already. Because it is. Darn inconvenient, that is.


Ethylene synthesis is complicated, so we won’t talk too much about it. Essentially, you start with methionine, a sulfur-containing amino acid, and, through a series of steps, make ACC. (You can, therefore, consider both methionine and ACC precursors.) ACC is then converted into ethylene by ACC oxidase. We’re pretty sure this takes place in the cytoplasm. Pretty sure.

This pathway is really tightly regulated, and is mostly dependent upon the stability of the enzymes that carry it out (ACS and ACO). Usually, both are really unstable, but wounding or other hormones make them more stable.

There are actually nine genes for ACS, which each have both unique and common functions. The more of them you knock out, the sicker the plant gets—if you knock out all nine, it’s nonviable. (In other words, plants cannot live without ethylene.)

As you might expect, there exist certain mutants in plants that overproduce ethylene. In these mutants, a protein called ETO1 is mutated. ETO1 is part of a ubiquitin ligase complex that targets ACS for degradation. Without it, ACS isn’t degraded, and more ethylene is produced than is necessary.


Interestingly enough, ETR1, ethylene’s receptor, was the first phytohormone receptor ever identified. ETR1 is a membrane-bound receptor that initiates a pathway in a manner very similar to the cytokinin two-component system.


Like everything else, ethylene meddles with an awful lot of plant processes. It’s well-known for promoting ripening and senescence, but it also plays a role in shoot/root elongation, flooding responses, and pathogen responses.

For example, ethylene puts a damper on elongation while promoting swelling in the dark. This is useful because it’s a way for a germinating seed to deal with obstructions in the soil—the resulting “triple response” causes the shoot to thicken and form a hook that can help it push past impediments. It’s also important in the immune system—with lowered amounts of it, plants get sick.

The most famous function of ethylene, of course, is in ripening and senescence. Ethylene dictates when the petals fall off your flowers, as well as when your apples, bananas, and tomatoes ripen. This is inconvenient, because it means that ripening in one fruit can set off ripening in another. (You can use it for good, though—next time you buy some green bananas, store them with an apple. They’ll ripen right up.)
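
The “one ripe fruit sets off the rest” effect is essentially a positive feedback loop, which you can caricature in a few lines (the threshold and timing here are invented):

```python
# Toy simulation of ethylene's autocatalytic ripening: ripe fruit emit
# ethylene into the shared air; once enough accumulates, everything in
# the bowl ripens, which emits more ethylene, and so on.

def ripen_bowl(ripe, threshold=1, days=3):
    for _ in range(days):
        ethylene = sum(ripe)            # each ripe fruit adds ethylene
        if ethylene >= threshold:
            ripe = [True] * len(ripe)   # shared air ripens the whole bowl
    return ripe

print(ripen_bowl([True, False, False]))   # [True, True, True]
print(ripen_bowl([False, False, False]))  # [False, False, False] — no trigger
```

One apple in with the green bananas is your threshold-crossing event.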

Alright, I’m exhausted, but we’re almost there. Onward, to my favorite hormone!

Salicylic Acid

Humans get lots and lots of useful things from plants, and salicylic acid is a perfect example. It turns out that, by complete coincidence, what functions as an immune hormone in plants functions as a painkiller in animals. Salicylic acid is named for Salix alba, or white willow—willow bark is used by animals and humans alike as a natural remedy for pain and fever. (When you acetylate it, you make acetylsalicylic acid, which is also known as Aspirin—and also named for a plant. You can’t escape plants. Don’t even try.)


Salicylic acid is made using two completely separate pathways, one in the chloroplast, and one in the cytoplasm. (The chloroplast supposedly accounts for 95% of SA production, although my professor contests those numbers.) In the chloroplast, it’s made from chorismate (a key enzyme is ICS), whereas in the cytoplasm, it’s made from phenylalanine (key enzyme: PAL).

SA undergoes lots of conjugation, too, because why wouldn’t it be complicated? Methyl salicylate is its transport form (which we’ll revisit), while glucosyl esters are its inactive storage forms.

Salicylic acid synthesis is instigated by, as you might expect, stress and pathogens. Salicylic acid primarily deals with biotrophic pathogens (pathogens that feed upon living tissue, such as bacteria). Its synthesis is triggered, for example, by receptors that detect flagellin, a component of bacterial flagella.


NPR1, which my professor said is named for the radio station, is a strong candidate for the salicylic acid receptor. (The other candidates are NPR3 and NPR4. While my professor has an opinion, she claimed she wouldn’t drag “thirty-five innocent students” into her battles.) It usually oligomerizes by forming disulfide bridges between monomers, linking up many small pieces into something big and bulky. However, in the presence of SA, the environment becomes reducing, and those bonds break. The monomers are free to move into the nucleus, where they can affect transcription in some way.

The targets of salicylic acid are a group of genes called “PR (pathogenesis related)” genes. Increased salicylic acid ramps up expression of these genes, which results in an immune response. Nifty, eh?


Salicylic acid plays a huge role in plant immunity, including systemic acquired resistance and hypersensitive response. It also functions in response to abiotic stress, and plays around with seed germination, flowering and senescence. Because of course it does.

The hypersensitive response describes a mechanism by which a plant manages to “quarantine” infected cells. Basically, this involves killing infected cells and the cells around them so that pathogens can’t travel. Systemic acquired resistance, on the other hand, is the mechanism by which an infected tissue alerts other tissues of the presence of a pathogen, essentially urging them to prepare themselves. In both cases, salicylic acid appears to have an important role.

Okay, I’m seeing double by this point. After writing this post, I think I’m going to need a salicylic acid derivative…

Jasmonic Acid

You know how nice jasmine smells, don’t you? Because you’re cultured, unlike me? Well, turns out that lovely smell is a plant hormone called jasmonic acid. That’s right. Sniff those plant hormones. Because that’s not weird.


Jasmonates are made from alpha-linolenic acid, an unsaturated fatty acid commonly found in plant cell membranes. Synthesis starts in the plastid, then moves to the peroxisome, and finally to the cytoplasm. Because plants, as we’ve learned by now, like to make a mess of things.

Jasmonates have two notable conjugate forms—Ile-JA and me-JA. Conjugation to isoleucine (a nonpolar amino acid) through the enzyme JAR1 results in the active form of the hormone. Methylation produces a stable form that can be more easily transported (and stored, according to my professor, who apparently worked in a jasmonate lab. “It smells great.”)


Ile-JA is recognized by COI1-JAZ, which are coreceptors. COI1 is an F-box protein, which means, you guessed it, it’s messing around with SCF. JAZ is a repressor that typically prevents MYC2, a transcriptional activator, from, um, activating transcription. When jasmonic acid binds to COI1, however, the SCF complex targets JAZ for breakdown, and transcription is activated.


Jasmonates act primarily in response to necrotrophic pathogens (pathogens that feed on dead tissue, such as fungi) and insects. They also, of course, play roles in development, but are we surprised at this point?

A primary way that jasmonates protect the plant against herbivory is through the production of proteinase inhibitors. When insects eat plants, they, unsurprisingly, produce proteinases in their gut to break down any protein they eat. The plant can secrete proteinase inhibitors, which prevent proteinases from working properly. This can weaken or even kill insects feeding on the plant.

They also communicate with salicylic acid, although salicylic acid is a bit of a bully when it comes to playing with other hormones. If both jasmonates and salicylates are present, salicylate signaling wins—in other words, if a plant is infected with both a fungus and a bacterium, it will fight off the bacterium before it fights off the fungus. (Which I don’t understand, because, quite frankly, fungi are terrifying.)

Wow, okay! I’m exhausted. This took a lot longer than I thought it would, but it’s also like… 75% of my test material. I guess, in the long run, the time investment paid off. I’m gonna skedaddle, though—I’m seeing crosseyed.

Have fun, friends!

Intro to Phytohormones

Good afternoon, everyone! It’s a lovely, cold, sunny day here in the middle of Alabama, and I’m coming to you live with the last bit of studying that I’m going to get done before I go keyboard-mash my way to a SciFi novella. (What can I say? Old habits die hard.) As Junhi and co. studiously pore over their physics homework on the library table across the divider from me, I sit here, fangirling over plant hormones.

Yes, you read that right, fangirling. (Have I mentioned that I want to be a phytochemist? Well, I do. I want to be a phytochemist.) Plant hormones are great.

“Agree to disagree, June!” you may be tempted to say. If you’re more like my roommate, Sapphire, you may be saying something more along the lines of, “That’s it, this is the final wedge between us.” Oh, come on! Before you make such hasty, sweeping statements, allow me to make my case.


I said before that plants are like aliens among us, and I continue to stand by that statement. They remain in a single spot for their entire lives. They can produce food from sunlight. They can regrow severed organs (or regrow from severed organs). They can–shudder–synthesize aromatic amino acids.

As different as plants are from humans, some things in biology, apparently, just work. Hormones are one of those things. Mammals have many hormones, as I’m sure you’re aware (my pet hormone is T3), that control many, many processes in our bodies. Plants, as it turns out, have them too.

Because we’re elitists, we’ve given plant hormones their own names–phytohormones. There are a great number of them, but for our purposes, we’re going to stick with seven: auxin, gibberellins, cytokinins, ethylene, abscisic acid, salicylic acid, and jasmonic acid.

You may recognize some of those (salicylic acid looks suspiciously like something you have in your bathroom, doesn’t it?), or you may not. Doesn’t matter. In the coming days, I’ll be publishing a blog post going over each of their structures and functions. Right now, however, it’s important to understand phytohormones in a general sense.

Phytohormones play a role in pretty much every part of a plant’s life. Embryogenesis? Check. Germination? Check. Cell division and elongation? Check. Cell differentiation? Check. Organogenesis? Yup. Sex determination? Yup. Reproduction? Yep. Stress response (abiotic and biotic)? Yepperino.

In fact, most hormones, when you really look at them, do most things, which is incredibly frustrating if you’re an aspiring biochemist. (“Ah, salicylic acid is an immune hormone! Wait, what are you doing messing around with seed germination? Uuugh…”) Thankfully, there are other commonalities, too, besides the processes that they like to muck around with.

For example, all hormones have to start somewhere–in other words, they have to be synthesized. Hormone biosynthesis is really tightly controlled, and many hormones can be conjugated with (attached to) other substituents (amino acids, carbohydrates, etc.) to change their activity. Sometimes, conjugation activates them, but other times, it marks them for storage or breakdown. The effect of the marker depends on the hormone.

Hormones also have to be transported, if they’re going to be of much use. (How helpful is it that the root can scream, “ASDFAKSDJ WE HAVE NO WATER” if the leaves can’t hear it?) Hormones can be transported through the vascular tissue either toward the shoot (through the xylem) or toward the root (through the phloem), or they can move across cell membranes and through special transporters. (Transmission through the vascular tissue is especially fascinating, because there are some hormones that only move in one direction–we call this “polar transport.”)

After a hormone reaches its destination, it must be detected. This is the job of particular proteins called receptors, which bind the hormone and end up fiddling with something else in the cell in response. In some cases, hormone binding results in phosphorylation of other proteins, which alters their activity and results in changes in gene transcription. Other times, hormone binding results in the breakdown of an inhibitor protein, which allows for the activation of the process the inhibitor was, well, inhibiting. (In my notes, I’ve written, “Destroy something to send a message–hecka,” and that’s pretty much how I feel about it.)

The resulting responses are diverse. Hormones can alter cell proliferation, elongation, and differentiation. They can promote or suppress branching in the shoot or root. They can open or close ion channels (which is important for, say, closing stomata in response to drought). They can trigger germination, or an immune response, or death. The possibilities are endless.

That was pretty convincing, right? You see that this isn’t hard, and it has the potential to be cool? (Because it does. It really really does.) You’re not gonna bail on me?

Awesome, because we’re just getting started.

You do you, fam.


Plants: An Introduction

Howdy, friends! I’ve got roughly two hours before my plant biology test, and I’ve returned to write a brief intro to my test material for your enjoyment (or lack thereof). This post won’t be long and complicated, but it will answer Sapphire’s constantly-reiterated question of, “Why the heck would you choose to study plants?” It’ll also give you some background to better understand the post that logically comes after this, Plant Growth.

Plants are pretty much the most underappreciated thing on the planet, if you exclude phytoplankton and like… bees. They’re all around us, photosynthesizing, using the CO2 we dump into the atmosphere (by respiring and burning fossil fuels), turning it into sugar using sunlight, and pretty much just being chemical plants for materials that we literally would die without.

Do you like to eat? Well, in case you weren’t already aware, literally all of the energy we consume started in—you guessed it—plants. The existence of Kingdom Animalia depends on plants taking energy from light and turning it into sugar.

Do you like clothing? Yup, you can thank plants for anything cotton on your person.

Do you like shelter? Yeah, wood is a thing. And it comes from plants.

Do you like medicine? Go hug a tree. (Specifically a willow tree, if you’re fond of aspirin.) Before we started making and modifying medicines in pharmaceutical labs, most of them came from plants.

It doesn’t stop there, either. Plants are sources of lots of other molecules we use every day that we don’t generally think of as drugs (vanillin, caffeine). They clean CO2 out of our atmosphere (although we’re moving faster than they can, at this point…). It was with plants that we discovered genetics (Mendel’s peas, anyone?), viruses, and oh yeah, cells.

Just as our past and present are dependent upon plants, so is our future. Malnutrition, which affects billions of people globally, might be remedied with crops genetically modified to contain supplementary vitamins. (Golden rice was already implemented to treat Vitamin A deficiency, and it worked—it worked astoundingly well.) GMOs ([police sirens], yes, I know) are also the most viable response to the problem of growing global population and decreased crop yield due to climate change. Some scientists are even considering engineering edible vaccines from plants—can you imagine getting pertussis immunity by eating a banana instead of getting a shot?

Even if you’re one of those people who doesn’t think climate change is a problem or that vaccines are anything to celebrate (I’m not judging you), plant research is promising in other fields. Biofuels, for example, could become a renewable energy source that replaces fossil fuels. Plant diseases can be treated and eradicated, which is, of course, good for crop yields. And, of course, plants are still a premium source for therapeutic drugs.

All in all, plants are basically the greatest thing to happen to planet Earth (unless, of course, you were an anaerobic organism that was alive when photosynthesis came on the scene), and we should study them. So, let’s get on that! And what better way to learn about something than through its anatomy, amirite?

We’re just going to talk about vascular plants right now, because there’s a heck-ton (that’s an English unit) of diversity in Plantae, and we quite frankly don’t have the time. Vascular plants are already complicated enough, so let’s just… stick to that.

Plants have two parts, essentially—a shoot system, and a root system. The shoot system is everything that’s above ground. The root system is—you guessed it—everything underground.

There are three types of tissues in plants—ground tissue, vascular tissue, and dermal tissue. We’ll talk about them in that order, an order which has nothing to do with complexity.

Ground tissue is… everything that isn’t vascular or dermal tissue. It’s a simple tissue type (made of one cell type), and can be made of one of three kinds of cells: parenchyma, collenchyma, or sclerenchyma. Parenchyma is thin-walled and alive at maturity (certain kinds, chlorenchyma, contain chlorophyll). Collenchyma is thick-walled and alive at maturity (it’s the mechanical tissue under the epidermis in herbaceous plants). Sclerenchyma is thick-walled and dead at maturity—it comes as fibers (flax) and sclereids (stone cells).

Vascular tissues are the most complex—all contain multiple cell types. Xylem, which conducts water through the plant like a pipeline, is made up of tracheary elements, which are essentially the hollow corpses of cells lined up to make pipes. Parenchyma and sclerenchyma fibers are also present. Phloem, which is alive, conducts sugars, water, and hormones (sap) through the plant using living sieve-tube elements.

The vascular tissues in plants—xylem and phloem—are arranged differently in each plant part. In the root system, they form the core of the root. In the stem, they’re present in bundles, and these bundles are arranged in rings (unless you’re a monocot, in which case, [vague shrugging motion]). In the leaves, they’re present as the veins that we can all see very clearly.

The dermal tissue, of course, does exactly what you’d think—it forms the “skin,” or epidermis, of plants. It also includes the periderm, the protective outer layer of bark on woody plants.

And that’s it, folks! I realize that this post is disjointed, and I apologize for that, but I have to get going, sooo… In the meantime, enjoy this nice weather we’re having (haha), and maybe eat some vegetables for lunch? Just so they feel appreciated.

Supposedly, you might see a random post on inflammation and drugs in the future. If you do, don’t look at me…

Talking About Drugs – Pain and Inflammation

How do you do, fellow chemists? It is I, Cute Science Girl, here to talk to you about the molecules and the atoms. My favorite chemistry is when the element combines with the other element to make… another element? A rock? Some plants or something?

Folks, I have a confession to make. I know nothing about chemistry. In fact, that last paragraph was entirely a lie. I am neither cute nor a girl, and I sure don’t know much about science. I am merely a sad, tired nursing student staying up way later than he should be on the eve of a pharmacology test.

Some of you guys may know me as Finn from June’s other (rarely updated) blog, Time For a Misadventure. She’s been asking me for a while now to share some of the stuff I’ve been learning in my classes on this blog, and with a big test coming up, I figured it’d be a great time to start! I’m not nearly as good a writer as June, but I hope that someone finds a bit of this interesting!

With all that said, let’s get cracking!

Things to Know About Pharmacology

Before I can really start talking about various drugs and stuff, there are a couple of things that you should know about me and pharmacology. I’ve compiled a nice bulleted list for you guys:

  1. I’m really bad at it.
  2. I’m not a health care professional. Yet.
  3. I’ll probably be using some fancy-shmancy medical terminology here. If you don’t know what it means, that’s okay – I probably don’t either. I’ll try and explain it in simple terms for both of our sake.
  4. Did I mention I’m kind of bad at it?

With that out of the way, it’s time to talk about some simple terms!

To kick things off, it’s important to understand how a medication works and is processed by the body. The absorption of a drug is, to put it simply, the process of getting the drug from wherever it’s been administered into the bloodstream. Medication can be administered by all kinds of different routes. The most common ones are by mouth (abbreviated as PO), as an injection into your muscles (IM), as an injection into your subcutaneous tissues (SubQ), and directly into your veins by way of an IV (oddly enough, abbreviated as IV). Of these, the slowest route of administration is by mouth, and the fastest is by IV.

The next step in the drug process is distribution. Distribution is the transportation of a drug from the bloodstream to the place it needs to go. The medication does this by binding to transport proteins and hitching a ride throughout the body. The most common protein used for this process is albumin.

It’s important to note two things regarding transportation. Firstly, very few drugs can reach the brain. The blood-brain barrier acts as a filter, only allowing very small particles past. On the other end of the spectrum, almost all drugs can pass through the placental barrier. This is why so many medications aren’t supposed to be taken by pregnant women. It’s best to assume that if mamma is taking a drug, baby’s taking it too, for better or worse. 

Now, back to the drug process. The next link in the chain is metabolism. This is how a medication is converted from its pharmacologically active form into something more water-soluble and easily excreted. Most likely, this is going to be done by your liver. When a drug is metabolized (i.e., broken down), it forms pieces called metabolites.

Lastly, the drug is excreted from your body, most likely by the kidneys (urine). Though the kidneys are the primary organ for excretion, others can also take part in the process. Some drugs are excreted by the lungs when you exhale. Some are excreted by your skin when you sweat. A lot of drugs are excreted in breast milk, which is yet another reason why pregnant or nursing mothers can’t take many different medications. By now, I bet you’re sick of the word excreted.

Moving on, let’s talk about something exciting – controlled substances! There are 5 categories for controlled substances, called “Schedules.” The lower the schedule, the more likely a drug is to be abused, and vice versa. I like bulleted lists a lot, so here is another one for you guys:

  • Schedule 1: Drugs that are easily abused that have no medical benefit. Ex: Ecstasy
  • Schedule 2: Drugs that are easily abused, but have specific medical use. Ex: Adderall
  • Schedule 3: Drugs that are less likely to be abused, but can lead to physiological and psychological dependence. Ex: Steroids
  • Schedule 4: Low abuse potential, and have accepted medical use. Ex: Lorazepam
  • Schedule 5: Very low abuse potential. Ex: Lomotil

In the hospital setting, controlled substances are a bit of a nuisance. Every time you draw up a controlled substance, you have to count all the medication in the drawer and report how much is left. At the end of  the night, every single controlled substance is counted. If there is a discrepancy between how many meds are actually present and how many the computer says are supposed to be present, then everyone gets drug tested.


And no one goes home until the tests are finished. This takes a long time. After a 12 hour shift, absolutely no one wants to deal with several additional hours sitting in the hospital keeping them away from a warm bed. Needless to say, nurses are very, very careful with controlled substances. Or at least they’re suppoooosed to be, but that’s another blog post entirely!

The next thing I want to talk about is pregnancy categories. Much like controlled substance schedules, a drug can fall into 5 different categories depending on its effect on pregnant women. Time for another bulleted list, folks.

  • Category A: The drug is safe. Animal and human drug trials have shown no risk towards the fetus.
  • Category B: While animal trials have shown no risks to the fetus, there haven’t been any trials of its effects on humans.
  • Category C: Animal trials of the medication have shown evidence of risk to the fetus, but again, no trials of its effects on humans. The benefits the drug may have for the mother must be weighed against the risk it poses to the fetus. Most drugs fall under this category.
  • Category D: There is evidence of risk to the fetus when taking this medication. Again, the benefits this drug may have for the mother must be weighed against the risk it poses to the fetus.
  • Category X: A drug so dangerous, the medical professionals skipped most of the alphabet to appropriately name it. With Category X drugs, the benefit the drug may have for the mother never outweighs the risk it poses to the fetus.

A Category X drug would have what we call a Black Box Warning (BBW) for use in pregnant women. With a black box warning, the FDA has deemed that a drug has a high risk of causing death in specific populations or situations. While a medication may be completely harmless to one person, it could kill someone else. For example, the drug levothyroxine has an FDA-mandated black box warning for usage as a means of treating obesity. While levothyroxine does an exceptional job at boosting your metabolism, if you take it for weight loss while you have a fully functional thyroid gland, you’re going to have a bad time.

Now I want to talk about a drug’s half-life. This is the amount of time it takes for a drug’s concentration in the body to decrease by half. Simple enough, right?

Well here’s the tricky part regarding half life. Let’s say you administered 40mg of a drug that has a half life of 12 hours. That means that 12 hours after administration, half of it should have left the body, leaving 20mg. Now, let’s jump forward another 12 hours. The drug should be completely eliminated from the body, right? If only it were that simple! In reality, the concentration of the drug would be at 10mg. Fast forward another 12 hours, and it would be at 5mg, another 12, and it’s 2.5mg, and so on.

See what’s happening? Every 12 hours, the drug’s concentration in the body is halving. That’s the problem with half-life: it takes an unbelievably long time for a drug to leave the body. If you want to estimate how long it will take for a medication to leave the body, the down and dirty way to do it is to multiply the drug’s half-life by 3. This might not seem too bad for medications that have a short half-life, but what if a drug has a half-life of 30 days? You’re going to be waiting a long time for that medication to leave the body.
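If you want to play with the halving math yourself, it’s just exponential decay. Here’s a tiny Python sketch using the numbers from the example above (the function name and the 40mg/12-hour figures are mine, lifted from the example, not any official pharmacokinetics formula):

```python
def remaining(dose_mg: float, half_life_hr: float, elapsed_hr: float) -> float:
    """Amount of drug left after elapsed_hr, halving every half_life_hr."""
    return dose_mg * 0.5 ** (elapsed_hr / half_life_hr)

# 40 mg dose, 12-hour half-life, checked every 12 hours:
for hours in (0, 12, 24, 36, 48):
    print(f"{hours:>2} h: {remaining(40, 12, hours):.1f} mg")
# Prints 40.0, 20.0, 10.0, 5.0, 2.5 mg — the same halving pattern as above.
```

Notice the amount never actually hits zero; it just keeps shrinking, which is why rules of thumb like “multiply the half-life by 3” exist in the first place.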

Lastly, I’ve got a pop quiz for you guys: what’s the difference between the two drugs Tylenol and acetaminophen? Just take a guess. Here’s the correct answer: they’re exactly the same thing!

In the pharmacy world, every drug has a Trade Name, aka the brand name, and a Pharmacological Name, aka the generic name. In order for a generic drug to be sold, the Food and Drug Administration has to approve it. In order for it to be approved, it has to be exactly the same as (or, more technically worded, bioequivalent to) the name brand.

Now, when it comes to herbal drugs, the sky’s the limit. While there is an organization out there that can approve an herbal medication as having the same therapeutic effect as a similar drug, it’s not required by any means.

So next time you go to the drug store and plan on picking up some headache medication, feel free to save a couple bucks and get the generic instead! And if you’d rather take some herbal medications, make sure you read the whole label!

And that’s the basics! Whew. That’s a lot of information, isn’t it? Well buckle up, buttercup: We haven’t even begun. Trust me. I’m already crying. I recommend some coffee.

Pain and Inflammation: Concepts

“Four pages in and it’s finally time to start talking about some medications!” …Is what I’d like to say, but before we can talk about pain meds, we gotta talk about the concepts of pain and inflammation for a bit. I know, it’s a bummer, but bear with me. This stuff is important!

Firstly, we need to find a good definition of pain. According to the International Association for the Study of Pain, pain is defined as follows:

“Pain is an unpleasant sensory and emotional experience associated with actual or potential tissue damage or described in terms of such damage.”

That sounds all well and good, but we here in the nursing field have another definition of pain that’s a bit easier to put into words:

“Pain is whatever the patient says it is.”

In a nurse-patient relationship, the expert on pain isn’t the doctor, it isn’t the nurse, and it isn’t the pain association. It’s the patient. If I were to come into the hospital complaining about how my pharmacology class is causing me severe pain (which it is, emotionally and physically), the staff may roll their eyes a bit, but they’d believe me. Pain is such a subjective thing that it is nearly impossible to tell whether someone is or is not experiencing it.

The management of a client’s pain depends on several factors, such as the nature of the pain, the location of the pain, the severity of the pain, and whether or not the pain is radiating throughout the body. Pain can be managed through non-pharmacological means such as physical therapy, behavioral changes, acupuncture, or even electric shocks. If necessary, pharmacological interventions, called analgesics, may be needed. In severe cases, surgical treatment may be required.

It is important to note that it is best to take pain medications while the pain is still relatively minor. It is significantly easier to treat pain before it gets bad.

Everyone experiences pain differently. An individual’s pain threshold is the level of stimulus needed to produce the perception of pain. This can vary wildly from person to person. What I think is painful is not the same as what June thinks is painful. What June thinks is painful is not the same as what Joe Shmoe on the street thinks is painful.

Closely related is an individual’s pain tolerance. This is the amount of pain they can endure before it begins to impact their daily life. Again, this is dependent on the person. While I may think that pain rated 3 on a scale of 1 to 10 is absolutely unbearable, June may rate her pain as an 8 and not even feel bothered by it.

Breakthrough pain is a form of pain that occurs even when you’re taking scheduled pain medications. It usually occurs in patients who are experiencing chronic pain.

With pain medication comes the problem of medication tolerance. Gradually, as you take pain medication, your body grows more tolerant of the medication, and higher doses need to be administered to get the same effect. This isn’t really a bad thing or a good thing, it just means the nurse needs to recognize the need for more medication.

A common fear that patients have when they’re taking pain medication is dependence. Now, there are two types of dependence: physical and psychological. With physical dependence, the body has adapted to the presence of a drug and depends on its effects to function. If a patient were suddenly taken off their pain meds, they would go through severe withdrawal. Psychological dependence is compulsive drug use driven not by the pain-relieving effect, but by the high it gives you. This is what many people refer to when they express fears of becoming addicted to pain meds.

Contrary to popular belief, addiction to pain medications is rare if it is being taken as it was intended. If you’re in pain and need the medication to function, you’re not addicted to it. You’re just in pain.

Now, let’s talk about some physiology!

When your body receives an injury, chemicals called prostaglandins are formed. Prostaglandins assist in regulating many different bodily functions, but are primarily involved in the body’s inflammatory response. COX enzymes assist in the formation of prostaglandins, and are generally divided into two creatively named categories:

  • COX1: Enzymes that have a protective effect on the kidneys and stomach. If these enzymes are inhibited, an individual will experience gastric irritation, ulcers, and bleeding.
  • COX2: Enzymes that cause pain and inflammation. Inhibiting these enzymes relieves pain and inflammation.

Most drugs work by inhibiting both of these enzymes, causing analgesia (pain relief) and decreased inflammation as well as gastric irritation. However, there are some that can selectively inhibit COX2 and avoid blocking the protective effects of COX1. Regardless, long-term use of medications that inhibit either enzyme may result in gastrointestinal problems.

Now, let’s talk about some specific disease processes. Ever hear of something called gout? I’m sure many of you are familiar with it, or may be experiencing it yourself. 

Gout is a condition in which uric acid builds up in your joints, and eventually ends up crystallizing. This causes an inflammatory response, because believe it or not, your body doesn’t particularly care much for crystals being in its joints. Gout is extremely painful, and comes on rapidly. For some reason, it occurs most frequently in the big toe.

Treatment for gout is generally just using medications or special diets to control your body’s levels of uric acid. The best way to decrease your risk of developing gout is to avoid excess amounts of red meat, seafood, and alcohol. I know, it sucks, but want to know what else sucks? Crystals. In your joints.

As we reach the home stretch for pain and inflammation concepts, we need to talk about one more thing: Headaches. If I were a guessing man, I’d say that headaches are probably the primary reason you use pain meds. 

There are three main types of headaches to consider. The first is a cluster headache. This is a type of headache that takes place on one side of your head. It causes extremely severe pain, and tends to manifest with other symptoms on the same side, such as tearing up, ptosis (a drooping eyelid), and a stuffy nose.

The next kind of headache, and the most common, is the tension-type headache. These cause mild to moderate pain on both sides of the head, and may feel like a tight band has been wrapped around your head.

Lastly, we have the dreaded migraine. Migraines are caused by a multitude of environmental and genetic factors. They start when your intracranial blood vessels dilate. Usually they take place on one side of the head with moderate to severe pain, and they’re often accompanied by other symptoms such as nausea, vomiting, photosensitivity, and sometimes numbness and tingling on one side of the face as well as speech difficulties.

Alright! We did it! We now know enough about pain and inflammation to have some form of understanding regarding how things work. Time to get into the real nitty gritty: The medications.

Pain and Inflammation: Medications

Acetylsalicylic Acid: Or, as most of you probably know it, aspirin. The primary uses for this medication are headache, fever, and myocardial infarction (a fancy term for heart attack). It works by decreasing platelet aggregation in your blood, which is a way of saying it makes your blood clot less. It does this by inhibiting prostaglandins.

There are two primary adverse effects of acetylsalicylic acid. The first is that since your blood clots less, you bleed a lot more. The second is nephrotoxicity, otherwise known as poisoning your kidneys. You’ll see this one a lot so keep it in mind.

Toxicity is a serious concern when taking aspirin. Considering a large number of people take aspirin for everything, it’s easy to accumulate a lot of it in your system. In children, any more than 4g of this stuff taken in one day is enough for them to experience an overdose. For adults, it takes more than 20-25g. Symptoms of acetylsalicylic acid toxicity are seizures, tetany (muscle spasms), tinnitus (ringing in the ears), and dizziness. Acetylsalicylic acid toxicity is called salicylism.

Aspirin interacts with a number of drugs. When taken with NSAIDs (Non-Steroidal Anti-Inflammatory Drugs), bleeding risk is increased and the NSAIDs’ effect is decreased. When taken with anticoagulant drugs, bleeding is increased. Alcohol increases the risk for toxicity. When taken with insulin, blood sugar is decreased.

Lastly, aspirin has a Black Box Warning for children under the age of 6 due to a risk for swelling of the brain, a condition called Reye’s Syndrome.

Acetaminophen: Bet you’ve all heard of this guy before. Acetaminophen is used for pain and fever, but has no anti-inflammatory effect. Like aspirin, it works by inhibiting prostaglandins.

Again, like aspirin, acetaminophen is nephrotoxic. It is also hepatotoxic, which means poisonous to the liver. This is another side effect you’ll see frequently, because so, so many drugs are metabolized in the liver.

Acetaminophen is the absolute safest pain medication for children and pregnant women. Because of this, it is in everything. That means like aspirin, it’s really easy to overdose. The max daily dosage is 4g.

You know how everyone says not to take medications with alcohol? Yeah. There’s a reason for that, and it’s not because it boosts the effectiveness of the drug. Alcohol is very damaging to the liver. If you take a medication like acetaminophen or aspirin with alcohol, it has an additive effect and your liver is damaged even more. You like your liver? Don’t take medications with alcohol!

The other noteworthy drug interaction acetaminophen has is that it increases the effect of seizure medications.

And that’s about all there is to say about acetaminophen. All in all, a solid drug.

Ibuprofen: We’re on a roll here with familiar medications, aren’t we? Ibuprofen is used for fever, pain, and dysmenorrhea (menstrual cramps). In high doses, it also has an anti-inflammatory effect. Ibuprofen is one of those drugs I mentioned earlier that inhibits both COX1 and COX2. Can you guess what that means?

I’m assuming you guessed “Gastrointestinal upset is a side effect,” because I believe you’re a smart individual. Because of that, it is recommended to take ibuprofen with food. It is also nephrotoxic, and carries a risk for gastrointestinal bleeding.

Ibuprofen is absolutely 100% NOT safe for pregnant women! The ductus arteriosus is an opening between two major blood vessels that bypasses the baby’s lungs while it is still in the womb (patent is just a fancy term meaning open). Taking NSAIDs like ibuprofen late in pregnancy can make that vessel close prematurely, before the baby is even born, which is very bad news for the baby’s circulation. Please don’t take ibuprofen if you’re pregnant!

Ibuprofen also has a Black Box Warning for use for treatment of perioperative pain after a coronary artery bypass surgery. Which is super specific, but this is the way of black box warnings. 

Like the other drugs mentioned, ibuprofen has an increased risk for bleeding when taken with other anticoagulants, and alcohol increases the risk for toxicity. See a trend?

Meloxicam: Now we have an anti-inflammatory drug! Meloxicam is an NSAID (remember what that stands for?) and is used for the treatment of osteoarthritis and rheumatoid arthritis.

Side effects for meloxicam are shortness of breath, hemoptysis (coughing up blood), and bronchospasms. Because of this, it is not recommended for asthma patients. Additionally, the drug is pregnancy category D.

Meloxicam has a Black Box Warning for cardiovascular events.

This drug boosts the effects of lithium supplements and increases its concentration in the body. Lithium has a very narrow therapeutic range, so there is a high risk of toxicity here. As expected, don’t take it with alcohol.

Indomethacin: Another anti-inflammatory drug. It’s used for gout attacks as well as rheumatoid and osteoarthritis. This is another drug that works by inhibiting both COX1 and COX2.

Side effects of note with indomethacin are bleeding and, if given via an IV, pulmonary hemorrhage.

Don’t take this drug if you’ve got any gastrointestinal bleeding or significant renal impairment.

Celecoxib: What’s with all the anti-inflammatory medications all of a sudden? Like meloxicam and indomethacin, celecoxib is for rheumatoid arthritis, osteoarthritis, and, this time, juvenile arthritis. Unlike those other two meds, however, this one is a selective COX2 inhibitor. That being said, with prolonged use it can still cause gastrointestinal upset.

If you’re allergic to NSAIDs, or ASAs, you shouldn’t take this drug. You also shouldn’t take it if you have any renal impairment.

Celecoxib has a Black Box Warning for cardiovascular events, much like meloxicam.

This drug has an 11-hour half-life due to being highly protein bound. Just an interesting little tidbit.

Now let’s talk some gout meds!

Allopurinol: This is the most basic of all gout medications. All it does is decrease the amount of uric acid in the body. It has no analgesic or anti-inflammatory effect. It is purely a prophylactic drug, meaning it’s a medication that is taken as a preventative.

It can cause pruritus (itching), rashes, hepatotoxicity, and renal toxicity.

When taken with warfarin, risk of bleeding increases. When taken with any ACE inhibitor, it increases the blood pressure lowering effect.

It’s important to increase your fluid consumption when you’re on this medication. Uric acid is excreted through the kidneys, so you need a lot of fluid to increase urine output. A yearly eye exam is also recommended.

Colchicine: In contrast to allopurinol, colchicine is used in acute gout attacks, though it also functions as a surgical gout prophylaxis. It works by inhibiting leukocytes from reaching the injury site, preventing the inflammatory response.

Some side effects for this medication are paralytic ileus, aplastic anemia, bone marrow suppression, and vasoconstriction.

As this medication is for acute gout attacks, it needs to be taken at the first sign of gout to work properly.

Morphine: Moving out of gout meds and back to pain meds, say hello to the big dog himself. Morphine is a pretty crazy drug. As an opioid, it doesn’t work by inhibiting any sort of inflammatory effect. Rather, it simply blocks your brain’s perception of pain. It doesn’t fix your pain per se, it just makes you forget it’s there.

Morphine is a central nervous system depressant. That means it knocks you out real good. It can also cause hypotension.

Because it has such a powerful effect on the CNS, you cannot give morphine to a patient who is sedated or suffering from respiratory depression. If you give morphine to a patient who is already breathing very little, it will just make them stop breathing completely.

Don’t take morphine if you have increased cranial pressure or if you’re pregnant. If the mother gets morphine, the baby gets it too.

It goes without saying, but don’t take this medication with alcohol. Remember what I was saying about respiratory depression? Alcohol is a CNS depressant too, so combining the two makes that even more likely. If you like breathing, I recommend staying away from alcohol while on morphine.

Morphine is the drug of choice for chest pain. It decreases the heart’s oxygen demand, decreases the heart’s workload, and causes vasodilation. All of that stuff is very welcome for a troubled heart.
While morphine tends to be the opioid of choice in a lot of situations, there are many others that are used in the hospital setting. 

Now! Rapid-fire opioids!

Codeine is used for mild pain relief and as an antitussive (cough suppressant). It has minimal risk of being abused.

Fentanyl is a transdermal pain medication, which means it’s given as a patch. It’s a crazy strong pain reliever, but it can take up to 6 hours to take effect. Needless to say, it’s best suited for continuous pain control rather than dealing with acute pain. On the upside, one patch lasts up to 3 days!

Methadone is a cheap, long lasting pain medication that’s given primarily for chronic pain or to those going through withdrawal. 

Lortab is a combination med made of acetaminophen and hydrocodone. It is the #1 most abused drug in Alabama. If abused, the patient can go into severe withdrawal if not properly weaned off.

Lastly, hydromorphone is a pain medication that is given in small doses. It works very well in patients who don’t respond to morphine.

An important drug to consider when administering opioids is naloxone. You know how opioids can cause severe respiratory depression? Naloxone is the drug you give to bring them back. It completely reverses the effect of opioids, raising heart rate, blood pressure, and respiratory rate.

Of course, it also reverses the analgesic effect. During the 1-2 hours this medication’s effect lasts, no further pain meds can be administered. So yes, naloxone will make the patient breathe better, but it’ll also make them hurt. Try to only use it when absolutely necessary.

Finally, let’s talk about some headache meds!

Acetaminophen/Aspirin/Caffeine: This is a combination drug for headache relief. It increases your capillary permeability and causes vasoconstriction. This increases both your heart rate and blood pressure. It’s not recommended for diabetics, as vasoconstriction decreases blood flow.

Sumatriptan: A hardcore headache medication. It provides relief for migraines only. It works by stimulating 5-HT1B/1D receptors. What’s that mean? Good question, folks.

Side effects for sumatriptan include chest pain and coronary artery spasms. Don’t take it if you have any cardiovascular disease, have a history of hypertension, or smoke. In pregnant women, it should only be used if the benefits of the medication outweigh the risks.

SSRI medications increase the effect of sumatriptan. Concurrent use of MAOIs may cause serotonin syndrome.

It’s important to note that Sumatriptan is an abortive therapy. That means that you take it the moment you start feeling a migraine come on.

Ergotamine: Take a deep breath, because this will be the last medication we’ll cover. Once again, we’ve got ourselves a headache relief drug here.

It works by affecting serotonergic, dopaminergic, and alpha-adrenergic receptors. This medication is given sublingually, which means under the tongue. It also stimulates smooth muscle.

Because of this, it has a Black Box Warning in pregnant women and is a Category X drug. It’s not recommended for individuals with cardiovascular disease or renal disease, or for children.

In Conclusion!

Wow. That’s a lot of medications. Enough to make you lose your mind even. I know my brain is fried, and that’s only half of the stuff I have to know for my test!

I debated whacking you guys in the face with some “Infection Medication Knowledge” ™, but considering this is already about 16 pages long, I’ll spare anyone that was crazy enough to read this. Perhaps another time!

And with that, I’m going to sleep y’all. It’s 5am and I gotta leave for class at 7am. Some of you will probably act surprised by this, but those of you in college are probably just rolling your eyes and saying “Pfft. Lightweight.”

Until next time, peace!

Plant Growth

Hello, Internet! It’s been an awfully long time, hasn’t it? It sure feels like it, to me. I’ve gotten my first physical chemistry and my last biochemistry ([sobs]) under my belt since we met last, and I’m back with a vengeance to take down the last of my undergraduate coursework. Or, you know, to be taken down by my coursework, kicking and screaming.

(I’m not kidding. With the way Physical Chem II is going, it feels an awful lot like it’s going to be mostly screaming. And crying. Lots of crying.)

I’m taking a break from feeling hopelessly, hopelessly lost to study something that makes me feel hopeful—plant biology! (Yes, you read that right. I’m taking plant biology, much to my roommate’s exasperation, because I’m not planning on being a doctor, and plants are freakin’ cool, okay?)

Alright! Now that all the formalities are out of the way, let’s get down and dirty (hah) with a lil’ post on the growth of plants.

Doesn’t that sound incredible?

Plants are, in all honesty, really, really weird. I often jokingly tell Sapphire, when she accuses me of treason for taking this class, that plants are like aliens among us, and I have every intention of being on the right side of the next alien uprising. I’ll stand by that statement here. Plants are awesome, and I welcome our tree overlords with open arms.

One of the things that seems to captivate the human imagination when it comes to plants is the way that they just… grow in the way that they do. You stick this tiny, shrivelled, potentially old, maybe dethawed seed in the dirt, and you’ll soon have a little green plant poking out of it. The stem will always grow up, the roots will always grow down, and, provided that your selected spot has enough water, light and nitrogen, the plant as a whole will be perfectly happy there for the rest of its life.

“So,” you ask, “how does a plant do its thing?”

“I mean,” you continue, “I kind of already know, but I’m assuming you have more to say on the topic, and I’m already here, so I might as well listen.”

(I apologize if I do a lot of writing in the second person—I’ve been binge-listening to Welcome to Night Vale.)

Well, if you’re going to talk about growth in an organism, the first thing you have to do is talk about growth on the smallest scale—the cellular scale. In order for anything to grow, the fundamental units from which it is composed (cells, in case you missed the memo) must replicate themselves. You’ll already be familiar with the term for cell division—yup, we’re talking about mitosis!

As you probably already know, the general gist of mitosis is that a cell duplicates its genome, distributes that information into two new nuclei, and then splits itself apart (a technically separate process called “cytokinesis”).  Plant cells undergo mitosis this way, too, but plant cells are also… special.

You see, plant cells have some organelles that animal cells don’t, the most prominent of which (other than, you know, chloroplasts) is the central vacuole. This is an organelle that modulates the internal pressure of plant cells, and it’s also the dumping site for enzymes and anything the cell deems dangerous. (You like onions, right? Because you’re weird? The compound that makes you cry when you cut onions is syn-Propanethial-S-oxide, and it’s made from enzymes that are released when you bust the central vacuoles of onion cells. Serves you right. Hah.)

The central vacuole is great, but not when you’re a humble nucleus tending to cell division. That’s where the phragmosome comes in. Strands of cytoplasm (aptly called “cytoplasmic strands”) slice the vacuole into smaller pieces, and actin filaments drag the nucleus to the center of the cell. The strands then merge into a sheet of cytoplasm (the phragmosome itself) that marks the plane of cell division. During cytokinesis, the phragmosome turns into a phragmoplast, which “serves as a scaffold for cell plate assembly.” The cell plate, our final plant-specific structure, is where the plant cell’s rigid cell wall forms.

It’s also worth noting, while we’re at it, that plant cells are capable of forming secondary cell walls, which essentially involves adding thickness to the inside of their primary cell walls in three layers. These become so thick that, usually, the cell traps itself, and it dies. (Don’t worry—this is rather the point.)

Okay, so, now that we’ve talked about cell growth, we can talk about mature plant growth, right? Wrong, dearest reader. There’s a very important link between cell division and a mature plant. What do we call that?

Yup. We gon’ talk about plant embryology.

(“Embryology?” Yep. What’d you think was in that seed?)

Shh, calm down, it’s not as hard as it sounds. I know everyone loathes plant reproduction, but it’s okay! We’re going to start with a zygote, so we don’t have to deal with double fertilization. Are you chilled out now…?

Okay, good. So, if you want to make an adult plant, you have to start from a baby plant. If you want a baby plant, you have to start with a zygote. This is not so unfamiliar—animals, including humans, do this mess too. What’s different is the stage at which polarity (a difference between the two ends of the embryo) is established. In humans, you can split cells off of an embryo until the blastula stage, and you’re good to go. (That’s how you get identical twins.) In plants, we’re not so lucky—the different ends of the embryo are different within minutes of fertilization.

The first time the cell divides, it produces two inequivalent daughter cells: a terminal cell and a basal cell. The basal cell develops into the suspensor, which anchors the embryo and feeds it nutrients from the endosperm. (This is not an embryonic root. However, the top part of it, called the “hypophysis,” does become part of the root cap.) The terminal cell develops into the embryo.

As the embryo develops, it passes through several “adorably-named” stages. A globular embryo already has layers of cells with different fates. A heart-stage embryo (and yes, it’s heart-shaped) has bilateral symmetry. A torpedo-stage embryo has cotyledons (two, if it’s a dicot) and all the right meristems in all the right places.

Okay! Now that we’ve taken a detour to talk about embryogenesis, we can talk about plant growth. Don’t get too hype, though—it’s not as weird as it might seem.

Basically, there are two kinds of growth in plants—primary and secondary. Primary growth is growth “up”—length in the stems and roots, branching, leaf-, flower-, and fruit-production. Secondary growth is growth “out”—think of trees getting wider by the year. (You may have heard this in the form of a joke from a biology teacher, as in, “I no longer undergo primary growth, but I sure do undergo secondary growth!”)

The hotspots for plant growth are the meristems, and they come in two varieties (unless you’re grass). Apical meristems contribute to primary growth, forming protoderm (pre-dermal tissue), ground meristems (pre-ground tissue) and procambium (pre-primary-vascular tissue). Lateral meristems, on the other hand, contribute to secondary growth, in the form of vascular cambium (pre-secondary-vascular tissue) and cork cambium (pre-periderm).

(If you’re a grass, you have intercalary meristems, which are found at the internodes of grasses.)

Each plant has two apical meristems: a shoot apical meristem, and a root apical meristem. The shoot apical meristem is a tiny, sensitive part of the plant, and at its center is a central zone, which is essentially a pool of indefinitely undifferentiated cells undergoing division. Some of the cells produced here get pushed out into the peripheral zone to become lateral organs, or into the rib meristem, to form stem. Further down, vascular tissue also develops.

The same is true for roots (division at the tip, elongation further down, and, finally, cell differentiation), but the root meristem also gets a bit of extra technology. You see, while the shoot apical meristem is up at the top of the plant, shielded from the world by leaves and living the cushy life, the root meristem (the most sensitive part, mind you) is literally driving itself head-first into the dirt. To keep the root meristem from smashing itself to smithereens, the plant equips it with a root cap, which protects it and helps it push through the dirt.

That’s all fine and dandy, but what about secondary growth? What about the lateral meristems? (“I NEED ANSWERS,” you might be growling.)

Well, the most important example of secondary growth is in the formation of secondary xylem and phloem. You might think you’re unfamiliar with these things (because I haven’t written the tissues post yet!), but you’re, like, so totally not. What happens when you add thickness to, say, one of our tree overlords? Yup. You get wood.

Secondary xylem accumulates on the interior of a plant, forming in a ring every year that the plant grows. (Xylem is deader than bread, and is basically a network of tubes that pipes water up from the roots.) This is what forms wood, with its rings that can be used to determine a hecka ton about the life of a tree. Phloem, in the meantime, accumulates on the outside, and only the newest of it is useful. (It’s still alive, and it shuttles sugar and water up and down. Where do you think we get sap?)

Whale, that’s it for plant growth! It’s not that hard, right? (“No, June! It’s really easy, and also I’m glad you’re alive!” Aww, thanks, reader.) Now we can move on to more specific things—things like, you know. Leaves.

Wanna fangirl with me, since my roommate clearly doesn’t? Hit me up, fam.


Hey, all you Goobers! Hope you’re having a good evening! Today, on the eve of my biochem test, I will be bringing you many posts about things that you probably don’t care about. First on the list is something that’s actually really important: the process of translation!

In yesterday’s post, I talked a bit about how mRNA manages the impressive task of coding for very specific proteins. However, I mentioned, in that post, that there was more to it than just that—there’s a whole process we have to take into account. That process, usually lumped together with transcription in high school biology books, is called translation.

You see, cells have these things called ribosomes. These things make proteins. In prokaryotes, they’re made of two different subunits: a 50S subunit and a 30S subunit. The ribosome is composed almost entirely of rRNA, with a few proteins tossed in that, interestingly enough, aren’t actually required for it to do its thing. (In other words, we’re talking about an RNA enzyme here!)

Ribosomes, unsurprisingly, have a very specific shape. Notably, each ribosome has, among a lot of other features, a decoding center (where decoding occurs) and a peptidyl transferase (peptide-transferring) catalytic center. Even more notably, although all organisms have ribosomes that are very structurally similar, sequence homology isn’t really a thing. (In other words, our ribosomes all look the same, but they don’t have the same RNA sequences.)

So, we’ve got ribosomes, we’ve got a transcript, and we’ve got charged tRNAs. Seems like we’ve got all the right ingredients, but how do we mix them up? Well, as with just about everything else that we’ve looked at so far, translation is a process with three steps: initiation, elongation, and termination. Let’s look at them in prokaryotes first, shall we?

In prokaryotes, initiation requires a special kind of tRNA called f-Met-tRNA. This holds a formylated methionine, which looks kind of like normal methionine that’s already been stuck in a peptide bond (the formyl group sits on its amino group). The tRNA associated with this amino acid recognizes the start codon for a sequence, seeking out AUG (or sometimes GUG or UUG).

However, before anything can bind the mRNA, the mRNA has to be in the right position on the ribosome. This occurs by the binding of a pyrimidine-rich sequence on the rRNA to a purine-rich sequence on the mRNA. That purine-rich mRNA sequence, called the Shine-Dalgarno sequence, puts the mRNA in the right place to be translated.
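Just for fun, here’s what that positioning logic looks like as a toy Python sketch. The “AGGAGG” consensus and the little downstream search window are simplifying assumptions on my part; real Shine-Dalgarno sites vary a lot:

```python
# Toy illustration (not a real gene-finder): locate a Shine-Dalgarno-like
# purine-rich motif, then the start codon a few bases downstream of it.
SD = "AGGAGG"  # one common SD consensus; real sites differ

def find_start(mrna):
    """Return the index of an AUG shortly downstream of an SD motif,
    or None if no SD/AUG pair is found."""
    sd = mrna.find(SD)
    if sd == -1:
        return None
    # The start codon typically sits a handful of bases past the SD motif.
    window = mrna[sd + len(SD):sd + len(SD) + 13]
    aug = window.find("AUG")
    return sd + len(SD) + aug if aug != -1 else None

print(find_start("GGCGAGGAGGUUAUCAUGGCUUUC"))  # 15 -- the index of the AUG
```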

Now that we’ve got everything situated, here’s how initiation goes down: a 30S subunit with Initiation Factors 1 and 3 (IF-1 and IF-3) bound to it binds a complex of IF-2, f-Met-tRNA, and GTP. The f-Met-tRNA finds its codon, and GTP is hydrolyzed in a step that is characterized by the release of the initiation factors and the binding of the 50S subunit.

Once we’ve got that down, we can start elongating our protein. This involves the ribosome moving (“translocating”) along the transcript while tRNAs move from one “site” of the ribosome to the next. The ribosome has three sites: the A site, where charged tRNAs are accepted, the P site, where the tRNA holding the peptide chain resides, and the E site, where spent tRNAs exit.

First, EF (Elongation Factor)-Tu binds a tRNA and GTP. This then enters the A site. The GTP is hydrolyzed, causing a conformational change that puts the aminoacyl part of the charged tRNA in the right spot in the peptidyl transferase site.

The actual formation of the peptide bond between the incoming amino acid and the existing peptide chain actually takes no energy input. As it is, the PTC (peptidyl transferase center) is just there to make sure that everything’s situated properly in order for the chemistry to occur.

Now an EF-G:GTP complex binds to the ribosome. By hydrolyzing its GTP, EF-G causes a conformational change that results in movement of the ribosome. The tRNA in the A site, which contains the peptide with its new amino acid, is moved to the P site. The “empty” tRNA in the P site is moved into the E site. The ribosome is lined up to another codon. Everyone’s happy.
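That shuffle is easier to see laid out explicitly. Here’s a little Python cartoon of one translocation step (purely illustrative; the site names are the only real biology here):

```python
# One EF-G-driven translocation step: every tRNA shifts one site over
# (A -> P, P -> E), and the A site opens up for the next charged tRNA.
def translocate(sites):
    """'sites' maps 'E', 'P', 'A' to whatever tRNA occupies each site."""
    return {
        "E": sites["P"],  # the now-empty tRNA moves from P to E (and exits)
        "P": sites["A"],  # the tRNA holding the peptide moves from A to P
        "A": None,        # the A site is vacant, lined up with the next codon
    }

before = {"E": None, "P": "empty tRNA", "A": "peptidyl-tRNA"}
print(translocate(before))
# {'E': 'empty tRNA', 'P': 'peptidyl-tRNA', 'A': None}
```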

Termination of the whole mess employs the use of release factors. Once the protein has been completely decoded, the end of the coding sequence will be reached. This is marked by a “stop” codon, a nonsense codon that isn’t recognized by any tRNA. Release factors bind, creating a 70S ribosome:RF-1/RF-2:RF-3:GTP complex. The peptidyl transferase hydrolyzes the peptide chain from the last tRNA, and the release factors are ejected. Ribosome Recycling Factor (RRF) pulls the resulting mess apart, disassembling the ribosome and releasing the mRNA and tRNA left inside.

That’s not too difficult, but what if you’re a eukaryote? Painful life experience has taught us that eukaryotes rarely do things with the same simplicity that prokaryotes do. So, how does eukaryotic translation differ from prokaryotic?

Well, actually, the differences aren’t too staggering. In initiation, some eukaryotic initiation factors (eIFs) bind to the ribosomal subunit before mRNA, providing a scaffolding for the mRNA and its accompanying proteins. The initial Met-tRNA (not formylated) binds before the mRNA. Then mRNA is bound to the subunit through the help of the eIF4 group (containing a “cap-binding protein”) and (poly)A-binding protein (PABP). Scanning then occurs, wherein the ribosome finds the start codon on the mRNA. GTP is hydrolyzed, and the IFs are then ejected.

Elongation is pretty similar, excepting the use of eEF1 and eEF2 in the place of EF-Tu and EF-G. Then, when a stop codon is reached, a single release factor (!!), with GTP attached, binds to the ribosome. The GTP is hydrolyzed, the peptide is released from its tRNA, and everything pretty much goes to pieces.

All right! So far, so good! We’re making a good amount of sense, right? Well, lucky you, it only makes more sense from here on out. Now that we’ve made our proteins, we have to do stuff with them, right? Well, before we can figure out how proteins work, we have to know their structures. Sounds like fun, right?

Brb, physics lab is a thing that exists.


The Genetic Code

Good evening, all! I hope you’ve had a decent month since I last checked in. I know I promised several posts on various specific aspects of transcription, but to be perfectly honest, they ended up sounding disjointed and uninteresting. (I was really freakin’ tired when I wrote them, so…) Perhaps I’ll go back and edit them, but in the meantime, I’ve got something more interesting to offer you: translation! Doesn’t that sound like fun?

Cells put a lot of effort into making RNA transcripts. That much is apparent, if only from our past study; the process is long, complicated, and very tightly controlled at more levels than we, as students, really want to think about. The reason for this is one that we’ve known since we were in high school: cells take these mRNA transcripts and turn them into proteins. (That’s, you know, what makes DNA a so-called “blueprint.”)

That’s great and all, but how do we get from Point A to Point B? Somehow, we have to take a seemingly random sequence of nucleotides and turn it into a very specific sequence of amino acids. Obviously, there has to be some kind of go-between, something that bridges the gap. But what could that be?

Enter tRNA! In the past, we talked about tRNA, and how aminoacyl-tRNA synthetases love to attach amino acids to them (a process called “charging”). That’s one vital piece of the puzzle: we’re linking an amino acid to something made of nucleic acid. So far, so good.

The next logical step is to suppose that, if the amino acids attached to each tRNA are specific for that tRNA, there has to be a way that the protein-building machinery of our cells (ribosomes) can tie that tRNA to a bit of genetic information. This is accomplished in a very simple and elegant way, through the use of a genetic code of triplets.

In mRNA, each sequence of three bases (a “triplet”) constitutes a piece of information that codes for one amino acid. This bit of information is called a codon. Codons correspond to complementary sequences, called anticodons, on the tRNA molecules attached to their specific amino acids.
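If you want to see the codon-anticodon relationship concretely, here’s a tiny Python sketch of Watson-Crick pairing (ignoring wobble for now, and remembering that the two strands run antiparallel):

```python
# Codon -> anticodon by complementary base pairing (read antiparallel).
PAIR = {"A": "U", "U": "A", "G": "C", "C": "G"}

def anticodon(codon):
    # Complement each base, then reverse to keep 5'->3' orientation.
    return "".join(PAIR[b] for b in reversed(codon))

print(anticodon("AUG"))  # 'CAU' -- the anticodon matching the start codon
```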

Codons are arranged on an mRNA molecule in a non-overlapping way, and their code isn’t punctuated. The sequence is read in the 5′ to 3′ direction, one triplet after another, according to the reading frame of the molecule. Therefore, if you shift the machinery’s reading frame a base forward or backward, you end up completely messing up the code.
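You can see how fragile the reading frame is with a quick Python sketch: the exact same bases, chopped into triplets from a different starting offset, give completely different codons:

```python
# Chop an mRNA string into codons, starting from a given frame offset.
def codons(mrna, frame=0):
    seq = mrna[frame:]
    return [seq[i:i + 3] for i in range(0, len(seq) - 2, 3)]

seq = "AUGGCAUUU"
print(codons(seq, 0))  # ['AUG', 'GCA', 'UUU']
print(codons(seq, 1))  # ['UGG', 'CAU'] -- shifted one base, totally different codons
```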

Each codon has a meaning, and all except for three (the three “nonsense,” or “stop,” codons) code for an amino acid. Most amino acids (excepting Met and Trp) are coded for by multiple codons, though no single codon ever codes for more than one amino acid. (The many-codons-per-amino-acid part is what’s called degeneracy.) Because of this, changes to bases in the genetic code don’t always alter the protein—as long as the altered codon is still one of the codons that corresponds to the amino acid specified by the original sequence, there’s no harm to the protein itself.

Additionally, amino acids with similar properties are coded for by similar codons. In other words, even if you manage to turn a codon for A into a codon for B, chances are that B will do about the same thing in the protein as A would have. This is another protection against harmful mutations.
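Degeneracy is easy to demonstrate with a toy lookup table. Here’s a Python sketch using just the proline and alanine codons (a tiny, illustrative subset of the real genetic code):

```python
# Toy codon table for proline and alanine only -- an illustrative
# subset, not the full genetic code.
CODON_TO_AA = {
    "CCU": "Pro", "CCC": "Pro", "CCA": "Pro", "CCG": "Pro",
    "GCU": "Ala", "GCC": "Ala", "GCA": "Ala", "GCG": "Ala",
}

def is_silent(codon, mutant):
    """True if the mutation leaves the encoded amino acid unchanged."""
    return CODON_TO_AA[codon] == CODON_TO_AA[mutant]

print(is_silent("CCG", "CCC"))  # True  -- both still read as proline
print(is_silent("CCG", "GCG"))  # False -- proline becomes alanine
```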

So, I probably know what you’re thinking right about now. “Wait just a darn minute. You just said each codon specifies an anticodon through complementary base pairing. How the heck can multiple codons code for the same amino acid, then? Does each specific form of an amino acid’s codon only get the specific anticodon that it’s complementary to? If so, that’s dumb. What’s the point in that, besides making things more complicated?”

I’m gonna stop you right there, because the answer is simpler than you think. The truth is that one anticodon can, in fact, pair with multiple codons! That’s pretty thrifty, if you think about it, but it also probably makes you feel cringy. After all, didn’t we just put a lot of effort into keeping this code as pristine and unaltered as possible? Does it even matter anymore? [cue existential crisis]

Well, it does matter, because even the not-quite-right pairing follows specific rules. More specifically, an anticodon can only pair with codons that differ at their third base (incidentally, this is where all synonymous codons differ). The anticodon base that pairs with this position, the position of “meh, whatever,” is called the “wobble position.” If the anticodon’s got a U in this position, the protein-making machinery can squeeze either an A or a G in there. If it’s got a G, you can get C or U. If it’s got an I (inosine), heaven help you—U, C, and A are free game.
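Those wobble rules fit neatly into a lookup table. Here’s a Python sketch, treating the wobble base as the anticodon’s 5′ base and listing which codon third-position bases it can pair with:

```python
# Wobble pairing rules: the anticodon's 5' (wobble) base mapped to the
# codon third-position bases it can pair with.
WOBBLE = {
    "G": {"C", "U"},
    "U": {"A", "G"},
    "I": {"U", "C", "A"},  # inosine, a modified base found in some tRNAs
    "C": {"G"},            # standard Watson-Crick pairing only
    "A": {"U"},
}

def can_pair(anticodon_wobble_base, codon_third_base):
    return codon_third_base in WOBBLE[anticodon_wobble_base]

print(can_pair("I", "A"))  # True: inosine reads U, C, and A
print(can_pair("C", "U"))  # False: C at the wobble position reads only G
```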

That being said, certain organisms favor certain codons for an amino acid over others. This “codon bias” accounts for the different base compositions of different genomes. For example, while E. coli and humans both need to code for proline, E. coli’s favorite codon for this is CCG, while ours is CCC.

All right, so we’ve basically pieced together how our cells piece together our proteins, but we’re still missing something vital. How the heck does this actually happen? Well, dear reader, that’s the subject of another blog post—come along with me, and we’ll enter the mystical, magical world of translation.

Questions? Comments? Want to rant and rave about how wonderful biology is? I hear you!