CBIO (CSCI) 4835/6835: Introduction to Computational Biology
Even though we're "officially" moving off modeling, we're still using similar techniques to study protein structure dynamics--how proteins fold, how they interact, how they bind, how they function. Proteins are enormously complex, and modeling them requires answering a range of questions. By the end of this lecture, you should be able to describe how protein structures are represented computationally, what a molecular dynamics trajectory is, and how PCA and SVD extract functional motions from trajectory data.
Not so dissimilar from computational modeling--basically a different side of the same coin.
Definition 1: Development and application of theoretical, computational, and mathematical models and methods to understand the molecular basis and mechanisms of biomolecular functions.
Definition 2: Understanding the fundamental physical principles that control the structure, dynamics, thermodynamics, and kinetics of biomolecular systems.
Basically, modeling of molecular structure.
As I'm sure everyone is aware: both computing technology and available storage have grown exponentially over the last few decades. This has enabled us to stockpile structure data and analyze it.
If you haven't, I encourage you to check out this website (the Protein Data Bank).
We'll come back to this (specifically, the eponymous PDB structure file).
Of course, obtaining the structure of a molecule in the first place is a challenge (e.g., via X-ray crystallography), but what we're discussing here is what we do after obtaining the structure information.
Note from the previous definitions a common thread: dynamics, kinetics, and thermodynamics. We're interested in how structure changes--i.e., a notion of time.
There's a trade-off between model resolution and simulation speed: the more detailed the model, the slower and more expensive it is to simulate.
What are the minimal ingredients of a simplified, but physically meaningful, model of structure?
First, we need to go over some preliminary background regarding protein structure before we get into how to model it.
As you all know, there is a hierarchy in protein structure.
This hierarchy is where structural variability--and ultimately, function--comes from.
Of course, polypeptide chains have potentially enormous variability (recall Levinthal's paradox).
There are "only" 20 amino acids, but accounting for their possible combinations, in addition to bond lengths, bond angles, and side chains exponentially increases the number of variables to consider.
We need a way of representing proteins from a modeling standpoint.
One way to represent a protein is to use our old friend and mainstay, Cartesian coordinates.
Even better than Cartesian coordinates is generalized coordinates: we throw out absolute XYZ space for a representation of the polypeptide in an arbitrary location.
Instead of recording absolute spatial coordinates of atoms, we record bonds, bond angles, and dihedral angles.
The concept of degrees of freedom is an important one: this is basically the number of parameters that make up the system or model you're examining. Put another way, it's the number of "knobs" you can turn to get a unique configuration.
In generalized coordinates, how many degrees of freedom are there? Put another way, how many values do we need to specify, assuming $N$ backbone atoms, to fully characterize the 3D structure of a polypeptide?
$N - 1$ bond lengths, $N - 2$ bond angles, and $N - 3$ dihedral angles. Total: $3N - 6$ internal degrees of freedom (plus 6 external degrees of freedom: 3 translational, 3 rotational).
Of all the $3N - 6$ parameters in a generalized coordinate system, bond rotations (i.e. changes in dihedral angles) are the softest, and are primarily responsible for the protein's functional motions.
These torsional bonds have names: the $\phi$, $\psi$, and $\omega$ angles, though any angle with the $\phi$ prefix is understood to be a dihedral angle.
A quick primer on converting from absolute (Cartesian) coordinates to the more algorithmically favorable generalized coordinates.
(the bond vector $l_i$ points from atom $i - 1$ to atom $i$; hence, we start with bond $l_2$)
Let's say you have a molecule with 4 atoms, $R_1$ through $R_4$, each with its own $(x, y, z)$ coordinates.
where $n_k$ is the unit vector that is perpendicular to the plane created by $l_k$ and $l_{k + 1}$. You can compute this vector by computing $n_k = \frac{l_k \times l_{k + 1}}{| l_k \times l_{k + 1} |}$
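To make the conversion concrete, here's a minimal NumPy sketch (the coordinates below are invented purely for illustration) that recovers the $N - 1$ bond lengths, $N - 2$ bond angles, and $N - 3$ dihedral angles from an $(N, 3)$ array of Cartesian coordinates:

```python
import numpy as np

def internal_coordinates(R):
    """Convert an (N, 3) array of Cartesian coordinates R_1..R_N into
    bond lengths, bond angles, and dihedral angles."""
    l = R[1:] - R[:-1]                      # bond vectors l_2 ... l_N
    lengths = np.linalg.norm(l, axis=1)     # N - 1 bond lengths

    # N - 2 bond angles between consecutive bond vectors
    # (some conventions report pi minus this value)
    cos_a = np.einsum('ij,ij->i', l[:-1], l[1:]) / (lengths[:-1] * lengths[1:])
    angles = np.arccos(np.clip(cos_a, -1.0, 1.0))

    # N - 3 dihedral angles, via the unit normals n_k = l_k x l_{k+1}
    n = np.cross(l[:-1], l[1:])
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    cos_d = np.einsum('ij,ij->i', n[:-1], n[1:])
    dihedrals = np.arccos(np.clip(cos_d, -1.0, 1.0))

    return lengths, angles, dihedrals

# Example: 4 atoms R_1 through R_4, as in the text
R = np.array([[0.0, 0.0, 0.0],
              [1.5, 0.0, 0.0],
              [2.0, 1.4, 0.0],
              [3.1, 1.8, 1.2]])
lengths, angles, dihedrals = internal_coordinates(R)
print(lengths.shape, angles.shape, dihedrals.shape)   # (3,) (2,) (1,)
```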
Given these parameters, how many possible different conformations are there for a given $N$-atom protein?
Infinitely many.
Or, in discrete space, a whole freakin' lot.
Luckily...
Ramachandran plot for 4 residue types:
Upshot: even though there are potentially infinitely many conformations, real-life proteins seem to inhabit a very small space of conformational possibilities.
At this point, we understand how to describe a polypeptide in space. Now we want to see what it does over time in some kind of environment.
This addition of time generates what is called a trajectory of the polypeptide, giving us insight into its function.
For a single residue $R_1$ (not to be confused with our definition of $R_1$ as the 3D Cartesian coordinates of an atom previously in this lecture), its trajectory looks something like this:
where $t_1$ through $t_{1000}$ are all the time points at which we evaluate the structure.
We're basically solving $F = ma$ for every atom in a residue at each time point; done over enough time points, we can start to see larger, coordinated behavior emerge.
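As a cartoon of what "solving $F = ma$ at every time point" involves, here's a toy velocity Verlet integrator with a made-up harmonic force. A real force field is vastly more elaborate; this sketch only shows the bookkeeping of stepping positions and velocities forward in time:

```python
import numpy as np

def velocity_verlet(x, v, force, mass, dt, n_steps):
    """Advance positions and velocities by solving F = ma at each
    time step, using the velocity Verlet integration scheme."""
    snapshots = [x.copy()]
    a = force(x) / mass
    for _ in range(n_steps):
        x = x + v * dt + 0.5 * a * dt ** 2   # new positions
        a_new = force(x) / mass              # forces at the new positions
        v = v + 0.5 * (a + a_new) * dt       # new velocities
        a = a_new
        snapshots.append(x.copy())
    return np.array(snapshots)               # (n_steps + 1, n_atoms, 3)

# Made-up harmonic force pulling every atom toward the origin -- only here
# to make the sketch runnable, not a realistic potential.
force = lambda x: -1.0 * x

x0 = np.random.randn(5, 3)    # 5 "atoms"
v0 = np.zeros((5, 3))
traj = velocity_verlet(x0, v0, force, mass=1.0, dt=0.01, n_steps=1000)
print(traj.shape)             # (1001, 5, 3): positions at t_1 ... t_1001
```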
Of course, that's just for 1 residue. A protein usually has quite a few residues, in which case the trajectory would look something like this:
From this information, we can compute a few critical quantities.
The average position vector for $R_1$ is written as $<R_1>$. This is simply the average over all $m$ time points: $<R_1> = \frac{1}{m} \sum_{k = 1}^m R_1(t_k)$
From this quantity, we can compute the instantaneous fluctuation vector at a specific time point $t_i$: $\Delta R_1(t_i) = R_1(t_i) - <R_1>$
Cross-correlation between fluctuation vectors of residues $R_i$ and $R_j$: $<\Delta R_i \cdot \Delta R_j> = \frac{1}{m} \sum_{k = 1}^m \Delta R_i (t_k) \cdot \Delta R_j (t_k)$
When $i = j$, this reduces to the mean-square fluctuation of residue $R_i$.
If we compute cross-correlations for all pairs of residues in the protein, this gives us the covariance matrix.
As the name implies, this matrix quantifies how each residue in a protein co-varies with each other residue over the duration of the trajectory.
The covariance matrix is a crucial component for understanding the long-term dynamics of a protein.
After all, we're not necessarily interested in what happens between individual femtoseconds, but we need enough femtoseconds to be able to observe cooperative, global behavior in the protein.
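Here's a short NumPy sketch of this whole pipeline, run on a randomly generated stand-in trajectory (real MD output would replace the random array): average positions, fluctuation vectors, and the covariance matrix of cross-correlations.

```python
import numpy as np

# Stand-in trajectory: m snapshots of N residues in 3D (random for illustration)
m, N = 1000, 50
traj = np.random.randn(m, N, 3)

mean_pos = traj.mean(axis=0)              # <R_i>, shape (N, 3)
dR = traj - mean_pos                      # fluctuation vectors Delta R_i(t_k)

# C[i, j] = <Delta R_i . Delta R_j>, averaged over the m snapshots
C = np.einsum('kid,kjd->ij', dR, dR) / m  # shape (N, N)

# Sanity check: the diagonal is each residue's mean-square fluctuation
msf = (dR ** 2).sum(axis=2).mean(axis=0)
print(np.allclose(np.diag(C), msf))       # True
```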
But how do we uncover these dynamics from a covariance matrix?
Anyone remember Principal Components Analysis (PCA)?
PCA is the most familiar of a family of analysis methods, collectively called dimensionality reduction techniques.
You can think of PCA (and really, any dimensionality reduction technique) as a sort of compression algorithm: condensing your existing data into a more compact representation, in theory improving interpretability of the data and discarding noise while maintaining the overall original structure.
Each algorithm goes about this process a little differently. PCA is all about maximizing variance.
Given some data, PCA tries to find new axes for the data. These axes are chosen to "maximize the variance in the data"--a complicated way of saying that it draws new coordinate axes to capture the directions in which the data move the most.
Once you have these "principal components", you can then choose to keep only the top handful--theoretically maintaining the majority of the variability in the data, but with considerably fewer dimensions.
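As a toy illustration of variance maximization (using scikit-learn on synthetic 2D data, not protein trajectories), note how nearly all the variance lands on the first principal axis:

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic 2D data, stretched far more along one direction than the other
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2)) @ np.array([[3.0, 0.0], [0.0, 0.5]])

pca = PCA(n_components=2).fit(X)
print(pca.components_)                # the new axes (principal components)
print(pca.explained_variance_ratio_)  # most variance is on the first axis

# "Compression": keep only the top component
X_reduced = PCA(n_components=1).fit_transform(X)
print(X_reduced.shape)                # (500, 1) -- same points, fewer dimensions
```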
This is precisely how we want to analyze our trajectory data.
We can rewrite the covariance matrix $C$ as $C = U \Lambda U^{-1}$, where $U$ is the matrix of eigenvectors and $\Lambda$ is the diagonal matrix of eigenvalues.
Cool. What do these mean?
Let's reorganize the equation a bit to gain some intuition about these quantities.
We have an exact correspondence: $\Delta R = Uq$ (the $k^{th}$ element of each eigenvector, $u_k$, corresponds to the motion of the $k^{th}$ residue!). Armed with this information, we can start to make some inferences about how the entire protein behaves!
Side, but important, note: the eigenvalues (and corresponding eigenvectors) are sorted in descending order in $\Lambda$. Therefore, the first eigenvalue (and its corresponding eigenvector) will represent the lowest-frequency motion, or the largest fluctuation amplitude.
Aka, the part of the protein that moves the most.
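A sketch of this decomposition in NumPy, with a random symmetric stand-in for $C$: since the covariance matrix is symmetric, `np.linalg.eigh` applies, but it returns eigenvalues in *ascending* order, so we flip everything to match the descending convention above.

```python
import numpy as np

# Random symmetric stand-in for the (N, N) covariance matrix C
N = 50
C = np.cov(np.random.randn(N, 1000))

# eigh handles symmetric matrices; flip both eigenvalues and eigenvectors
# so the largest-amplitude mode comes first
evals, U = np.linalg.eigh(C)
evals, U = evals[::-1], U[:, ::-1]

# Entry k of the first eigenvector measures how much residue k participates
# in the largest-amplitude (lowest-frequency) motion
dominant_mode = U[:, 0]
print("most mobile residue:", np.argmax(np.abs(dominant_mode)))

# Fraction of the total fluctuation captured by the top 3 modes
print(evals[:3].sum() / evals.sum())
```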
You can use the top handful of principal components to see exactly how the protein moves:
This all sounds well and good. Problem is, computing eigenvalues and eigenvectors ain't cheap.
We've discussed how $N$ (the number of residues) and $m$ (the number of snapshots) can both get extremely large.
Unless you get into highly sophisticated computing environments, your average SciPy eigensolver (yep, SciPy has an eigensolver!) runs in $O(n^3)$.
That's fine when $n$ is small, not so fine when $n$ is in the hundreds of thousands or millions.
Think of SVD as a sort of half-sibling to PCA. In the very specific circumstance where you want to compute a covariance matrix, SVD short-circuits that process and can give you a little performance boost.
Now we don't have to compute the full covariance matrix at all--just the trajectory matrix $A$.
Similar to PCA, SVD will break down the matrix of interest into three constituent components that have direct biological interpretations. Specifically, we represent our trajectory matrix $A$ as $A = U \Sigma V^T$.
Let's look at it visually. First, we have our trajectory matrix $A$ of $N$ residues over $m$ time points or snapshots:
From the SVD, one of the parameters we get back is the matrix $V^T$, the rows of which give the displacements along each principal axis (with columns indicating displacement at each time step):
Which you can then use to directly visualize the motion along the principal axes:
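In NumPy, the whole decomposition is one call. Here's a sketch on a random stand-in for the trajectory matrix, which also confirms the correspondence with PCA (the singular values square to the covariance eigenvalues, $\lambda_k = s_k^2 / m$):

```python
import numpy as np

# Random stand-in for the trajectory matrix A: N residues x m snapshots
N, m = 50, 1000
A = np.random.randn(N, m)

U, S, Vt = np.linalg.svd(A, full_matrices=False)  # A = U Sigma V^T

# Row k of Vt is the displacement along principal axis k at every time step
top_axis_over_time = Vt[0]                        # shape (m,)

# The singular values square to the PCA eigenvalues: lambda_k = s_k^2 / m
print(np.allclose(np.sort(S ** 2 / m),
                  np.linalg.eigvalsh(A @ A.T / m)))  # True
```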
These techniques decompose the trajectory into a collection of modes of the protein, sorted in order of significance.
You can use these modes to reconstruct the trajectory based on the dominant modes.
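Continuing the SVD sketch above, a rank-$k$ reconstruction keeps only the top $k$ modes:

```python
import numpy as np

# Same stand-in trajectory matrix and decomposition as in the previous sketch
N, m = 50, 1000
A = np.random.randn(N, m)
U, S, Vt = np.linalg.svd(A, full_matrices=False)

# Rank-k reconstruction of the trajectory from the top k dominant modes
k = 3
A_approx = (U[:, :k] * S[:k]) @ Vt[:k]

# Fraction of the total fluctuation those k modes retain
print((S[:k] ** 2).sum() / (S ** 2).sum())
```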
These modes are robust: they are invariant and do not depend on the details of the model or force field.
These modes are also relevant to biological function: you can see what conformations your protein spends most of its time in.