Many have called Sir William Osler (1849-1910) the “Father of Modern Medicine”. He was one of the four founding professors of Johns Hopkins Hospital, where he was instrumental in creating the first residency program for the specialty training of physicians. He brought medical students out of the classroom and to the bedside for clinical training. He also left us a profound insight: “Variability is the law of life and as no two faces are the same, so no two bodies are alike and no two individuals react alike and behave alike under the abnormal conditions which we know as disease.”
Clinicians have known for some time that diseases, as well as the way they are treated, can affect individuals differently. Tailoring diagnostic and therapeutic strategies to a patient’s individual characteristics is the field of precision medicine. Today, we are living in a time that allows us to implement precision medicine in certain areas. The question is: how far can we push this paradigm?
There are three “key ingredients” for precision medicine:
Our VarSeq as a Clinical Platform webcast last week highlighted some recent updates in VarSeq that support gene panel screenings and rare variant diagnostics.
The webcast generated some good questions, and I wanted to share them with you. If the questions below spark new questions or need clarification, feel free to get in touch with us at firstname.lastname@example.org.
Question: Should dbSNP filtering be done beforehand or does VarSeq have that built in?
Answer: There is no need to complete filtering beforehand. VarSeq starts as an empty project with your variant data merged from multiple samples, and you can then apply our starter templates, which include a few filters on common criteria. The dbSNP IDs from your incoming variant files can be set as an identifier field in VarSeq, and you can hyperlink the dbSNP IDs so they can be used as a reference. We keep up with the latest version of dbSNP, currently dbSNP 142, which you’ll see in our public annotation repository. Adding it as an annotation source lets you do things like create filters on the dbSNP IDs.
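Outside of VarSeq, the underlying idea is simple: a variant either carries a dbSNP rsID (in the VCF ID column) or it doesn’t, and that presence can drive a filter. Here is a minimal stand-alone sketch of that logic; the record layout is a simplified illustration, not VarSeq’s implementation.

```python
# Sketch: split variants into "known" (carry a dbSNP rsID) and "novel",
# based on the ID column of a VCF-style record. Illustrative only.

def has_rsid(vcf_id: str) -> bool:
    """In VCF, an ID field of '.' means no identifier was assigned."""
    return vcf_id != "." and vcf_id.startswith("rs")

def split_by_dbsnp(records):
    """records: iterable of (chrom, pos, vcf_id) tuples."""
    known, novel = [], []
    for rec in records:
        (known if has_rsid(rec[2]) else novel).append(rec)
    return known, novel

variants = [
    ("1", 10177, "rs367896724"),  # present in dbSNP
    ("1", 10352, "."),            # no identifier assigned
]
known, novel = split_by_dbsnp(variants)
```

In practice the same split is what a dbSNP-based filter card expresses: keep (or exclude) the variants whose IDs match the annotation source.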
Over 650 GenomeBrowse licenses have been registered and downloaded since the beginning of 2015, and with so many people enjoying the utility of this freeware program, I wanted to showcase some advanced tips and tricks so you can get more out of GenomeBrowse!
Under the Controls panel, when you have clicked inside a data plot, there is a “Filter” tab. This filtering option allows you to filter your data to create visualizations for publication or to manually inspect your data. Here I’ll take you through how to use this function to get the most out of your data and GenomeBrowse. First we’ll look at the options for filtering a BAM file.
Under the “Filter” tab in the control panel for a BAM file, there are four options for filtering the visible read alignments. Perhaps the most helpful is Filter Multi-Mapped Alignments, which allows the user to specify a mapping quality threshold. Below you can see an area of the genome with many reads that have not been uniquely mapped, shown with a moderately high threshold of Q50 versus a low Q5. The cross-hatching in the upper panel identifies these reads as hidden by the quality filter threshold, and they are not visible in the individual read plot. A similar, but distinct, option is to filter Duplicate Alignments: reads marked as “PCR or optical duplicate” within the BAM file.
The promise of Precision Medicine is to leverage highly targeted therapies for the benefit of the patient. By better understanding what makes us unique and leveraging our genetic makeup, we hope to improve outcomes for the individual. This post focuses on one issue we collectively have to overcome to make precision medicine a reality, and that issue is simply: cost.
For some time, lung cancer has been the poster child for precision medicine. At this point it is considered standard of care to identify targetable oncogenic drivers in stage IV patients. As an example, the anaplastic lymphoma kinase (ALK) gene has emerged as an important oncogenic driver in a small population of patients with adenocarcinoma. The prescription drug crizotinib has received accelerated US Food and Drug Administration approval when used in conjunction with its companion diagnostic test to identify patients with the EML4-ALK gene rearrangement.
From a medical perspective there is no question that this treatment “moves the needle”.
As VarSeq has been evaluated and chosen by more and more clinical labs, I have come to respect how unique each lab’s analytical use cases are.
Different labs may specialize in cancer therapy management, specific hereditary disorders, focused gene panels or whole exomes. Some may expect to spend just minutes validating the analytics and the presence or absence of well-characterized variants. Others expect every case to have its unique aspects, a puzzle to unravel with as many resources as possible as guides.
Yet for all the differences, these analytic workflows share quite a few key requirements:
- Detailed logs of the actions the user performed, providing a provenance trail for the data evidence supporting their decision-making process
- A way for the hard work of variant assessments to be globally saved and applied to all newly imported variants
- Note taking on the context and insights surrounding a patient, with annotations pulled in for a variant, its visual evidence, its genomic context, and snippets of web resources
- Ways of reproducibly filtering and ranking variants by bioinformatics thresholds, public annotations and known phenotypic information about patients
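To make the last requirement concrete, reproducible filtering means the same thresholds applied the same way to every case. Here is a hypothetical sketch of threshold-based variant filtering; the field names and cutoffs are invented for illustration and are not VarSeq’s actual schema.

```python
# Illustrative only: filter variants by explicit, reproducible thresholds.

def passes_filters(variant, max_pop_freq=0.01, min_read_depth=20):
    """Keep rare, well-supported variants. Field names and thresholds
    are hypothetical examples, not a real product schema."""
    return (variant["pop_freq"] <= max_pop_freq
            and variant["depth"] >= min_read_depth)

variants = [
    {"id": "rs1", "pop_freq": 0.30,  "depth": 55},  # too common
    {"id": "rs2", "pop_freq": 0.001, "depth": 48},  # rare and well covered
    {"id": "rs3", "pop_freq": 0.002, "depth": 8},   # rare but poorly covered
]
kept = [v for v in variants if passes_filters(v)]
```

Because the thresholds are explicit parameters rather than ad-hoc manual steps, rerunning the same filter chain on a new case yields the same decisions, which is the point of the requirement above.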
On January 30, 2015, President Obama announced the Precision Medicine Initiative. Many in our field, researchers and clinicians alike, recognize that such a program would bring additional funding into our space to design, develop, and implement new diagnostic tests that aid physicians in their practice of precision medicine. Here is what we know.
Led by the National Institutes of Health (NIH), the President’s initiative intends to fund research and facilitate collaborative public-private efforts to leverage advances in genomics. It would involve stakeholders in healthcare as well as the Food and Drug Administration (FDA), Health and Human Services (HHS) and the Office of the National Coordinator for Health Information Technology (ONC). It also intends to recruit expertise from multiple sectors and forge partnerships with existing research cohorts, patient groups and the private sector to capitalize on existing genomic discoveries as well as work currently under way.
As it stands, the funding would be divided four ways.
Some of our customers have recently published using our SVS and VarSeq software in their studies. We wanted to share their work and congratulate everyone on their success!
This week, Dr. Jeffery Moore presented a webcast, Molecular Sciences Made Personal. The webcast delved into Dr. Moore’s efforts to transform how chemistry is taught at the University of Illinois and demonstrated how he uses VarSeq with his students to examine exome data.
The following are the questions asked by the attendees. Please feel free to reach out to us at email@example.com if you have any other questions.
Question: How did you come about choosing Golden Helix over the other packages available?
Over the last year our blog has seen a boom in visits, and of course, I became curious. What brings people to “Our 2 SNPs…”? So, I decided to take a look at the blog posts that our community finds the most intriguing. Here are my findings:
- Comparing BEAGLE, IMPUTE2, and Minimac Imputation Methods for Accuracy, Computation Time, and Memory Usage - As the title hints, this blog post compares imputation methods. So which method takes the prize? Each program outperformed the others in certain areas, so it really depends on your specific needs.
In a previous blog post, I demonstrated using VarSeq to directly analyze the whole genomes of 17 supercentenarians. Since then, I have been working with the variant set from these long-lived genomes to prepare a public data track useful for annotation and filtering.
Well, we just published the track last week, and I’m excited to share some of the details involved in its making.
The track, named Supercentenarian 17 Variant Frequencies, GHI, provides not only the allelic frequency of observed variants in these 17 whole genomes, but also the counts of the heterozygous and homozygous genotypes for those individuals.
For example, when investigating a rare recessive disease, it’s safe to say that any variant occurring in a homozygous state in a 110-year-old individual is probably not your causal disease mutation.
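The quantities such a track stores, alternate allele frequency plus heterozygous and homozygous counts, fall directly out of the per-sample genotypes. A minimal sketch of that tally (the diploid 0/1 encoding is illustrative; this is not the track’s actual build pipeline):

```python
from collections import Counter

def genotype_summary(genotypes):
    """genotypes: one (allele1, allele2) pair per sample, where
    0 = reference allele and 1 = alternate allele. Returns the
    alternate allele frequency plus heterozygous and
    homozygous-alternate sample counts."""
    counts = Counter()
    alt_alleles = 0
    for a, b in genotypes:
        alt_alleles += a + b
        if a != b:
            counts["het"] += 1
        elif a == 1:
            counts["hom_alt"] += 1
    freq = alt_alleles / (2 * len(genotypes))  # diploid: 2 alleles/sample
    return freq, counts["het"], counts["hom_alt"]

# Example: 17 genomes with 3 hets, 1 hom-alt, 13 hom-ref at one site
gts = [(0, 1)] * 3 + [(1, 1)] + [(0, 0)] * 13
freq, het, hom = genotype_summary(gts)
```

Reporting the het/hom counts alongside the frequency is what enables the recessive-disease reasoning above: a variant seen homozygous in a supercentenarian can be down-weighted even when its overall frequency is low.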
So what was tricky about constructing this population variant catalog?
It turns out, quite a lot.