Bioinformatics and Computational Biology


Protocols in Current Issue
0 Q&A 205 Views Nov 5, 2025

The rhizosphere, a 2–10 mm region surrounding the root surface, is colonized by numerous microorganisms, collectively known as the rhizosphere microbiome. These microorganisms interact with each other, giving rise to emergent properties that affect plant fitness. Mapping these interactions is crucial for understanding microbial ecology in the rhizosphere and for predicting and manipulating plant health. However, current methods do not capture the chemistry of the rhizosphere environment, and common plant–microbe interaction setups do not map bacterial interactions in this niche. Additionally, studying bacterial interactions may require constructing transgenic bacterial lines carrying antibiotic resistance markers or fluorescent probes, or even isotope labeling. Here, we describe a protocol for both in silico prediction and in vitro validation of bacterial interactions under conditions that closely recapitulate the major chemical constituents of the rhizosphere environment, using a widely used Murashige & Skoog (MS)-based gnotobiotic plant growth system. We use auto-fluorescent Pseudomonas strains, which are abundant in the rhizosphere, to estimate their interactions with other strains, thereby avoiding the need to create transgenic bacterial strains. By combining an artificial root exudate medium, a plant cultivation medium, and a synthetic bacterial community (SynCom), we first simulate interactions using genome-scale metabolic models (GSMMs) and then validate them in vitro using growth assays. We show that the GSMM-predicted interaction scores correlate moderately, yet significantly, with their in vitro validation. Given the complexity of interactions among rhizosphere microbiome members, this reproducible and efficient protocol enables confident mapping of interactions between fluorescent Pseudomonas and other bacterial strains within the rhizosphere microbiome.
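
As a minimal illustration of the in silico step, the sketch below scores a pairwise interaction from two genome-scale metabolic models grown on a shared medium using the COBRApy toolbox. The model file names, the artificial-root-exudate medium composition, the resource-halving co-culture proxy, and the growth-ratio score are all illustrative assumptions, not the protocol's exact models or scoring scheme.

```python
# Minimal sketch: pairwise interaction score from two GSMMs with COBRApy.
# Model files, medium bounds, and the scoring scheme are illustrative placeholders.
import cobra

def growth_on_medium(model, medium):
    """Optimal growth rate of `model` when constrained to `medium`."""
    with model:  # bound changes are reverted when the context exits
        model.medium = {r: b for r, b in medium.items() if r in model.reactions}
        sol = model.optimize()
        return max(sol.objective_value or 0.0, 0.0)

# Hypothetical SBML files for a focal Pseudomonas strain and a SynCom partner.
pseudomonas = cobra.io.read_sbml_model("pseudomonas_strain.xml")
partner = cobra.io.read_sbml_model("partner_strain.xml")

# Hypothetical artificial-root-exudate medium: exchange reaction IDs -> uptake bounds.
are_medium = {"EX_glc__D_e": 5.0, "EX_succ_e": 5.0, "EX_nh4_e": 10.0,
              "EX_pi_e": 10.0, "EX_o2_e": 20.0}

mono_a = growth_on_medium(pseudomonas, are_medium)
mono_b = growth_on_medium(partner, are_medium)

# Crude co-culture proxy: each strain sees only half of every shared resource
# (a stand-in for a proper joint/community FBA formulation).
half = {r: b / 2 for r, b in are_medium.items()}
co_a = growth_on_medium(pseudomonas, half)
co_b = growth_on_medium(partner, half)

# Interaction score for the focal strain: relative growth change (<0 = competition).
score_a = (co_a - mono_a) / mono_a if mono_a > 0 else 0.0
print(f"focal strain: mono {mono_a:.3f} -> co-culture proxy {co_a:.3f}, score {score_a:+.2f}")
print(f"partner strain: mono {mono_b:.3f} -> co-culture proxy {co_b:.3f}")
```

In practice, the predicted scores from whichever community simulation is used would then be compared against the in vitro growth assays, as described above.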

0 Q&A 202 Views Nov 5, 2025

DNA methylation is a crucial epigenetic modification that influences gene expression and plays a role in various biological processes. High-throughput sequencing techniques, such as bisulfite sequencing (BS-seq) and enzymatic methyl sequencing (EM-seq), enable genome-wide profiling of DNA methylation patterns at single-base resolution. In this protocol, we present a bioinformatics pipeline for analyzing genome-wide DNA methylation. We outline the essential analyses step by step, including quality control of BS-seq and EM-seq raw reads with FastQC, read alignment with commonly used aligners such as Bowtie2 and BS-Seeker2, DNA methylation calling to generate CGmap files, identification of differentially methylated regions (DMRs) using tools including MethylC-analyzer and HOME, data visualization, and post-alignment analyses. Compared to existing workflows, this pipeline integrates multiple steps into a single protocol, lowering the technical barrier, improving reproducibility, and offering flexibility for both plant and animal methylome studies. To illustrate the application of BS-seq and EM-seq, we present a case study of an Arabidopsis thaliana mutant carrying a mutation in MET1, which encodes a DNA methyltransferase; this mutation results in global CG hypomethylation and altered gene regulation. This example highlights the biological insights that can be gained through systematic methylome analysis. Our workflow is adaptable to any organism with a reference genome and provides a robust framework for uncovering methylation-associated regulatory mechanisms. All scripts and detailed instructions are provided in the GitHub repository: https://github.com/PaoyangLab/Methylation_Analysis.
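
To make the intermediate file format concrete, the short sketch below computes genome-wide methylation levels per sequence context (CG, CHG, CHH) from a CGmap file such as those produced by the methylation-calling step. The file name and coverage cutoff are placeholders, and the standard eight-column CGmap layout of BS-Seeker2/CGmapTools is assumed.

```python
# Minimal sketch: global methylation level per context (CG/CHG/CHH) from a CGmap file.
# Assumes the standard 8-column CGmap layout: chrom, strand base, position, context,
# dinucleotide, methylation level, methylated reads, total reads.
import gzip
from collections import defaultdict

CGMAP = "met1_mutant.CGmap.gz"   # placeholder file name
MIN_COVERAGE = 4                 # placeholder per-cytosine coverage cutoff

meth = defaultdict(int)    # context -> methylated read counts
total = defaultdict(int)   # context -> total read counts

opener = gzip.open if CGMAP.endswith(".gz") else open
with opener(CGMAP, "rt") as handle:
    for line in handle:
        fields = line.rstrip("\n").split("\t")
        if len(fields) < 8:
            continue
        context = fields[3]            # CG, CHG, or CHH
        methylated = int(fields[6])
        coverage = int(fields[7])
        if coverage < MIN_COVERAGE:
            continue
        meth[context] += methylated
        total[context] += coverage

for context in ("CG", "CHG", "CHH"):
    if total[context]:
        print(f"{context}: {meth[context] / total[context]:.3%} "
              f"(from {total[context]} reads over covered cytosines)")
```

Run on wild-type and mutant CGmap files, a comparison of the CG values would reflect the global CG hypomethylation described in the case study.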

Protocols in Past Issues
0 Q&A 806 Views Oct 5, 2025

High-content analysis (HCA) is a powerful image-based approach for phenotypic profiling and drug discovery, enabling the extraction of multiparametric data from individual cells. Traditional HCA protocols often rely on fixed-cell imaging, with assays like cell painting widely adopted as standard. While these methods provide rich morphological information, the integration of live-cell imaging expands analytical capabilities by enabling the study of dynamic biological processes and real-time cellular responses. This protocol presents a simple, cost-effective, and scalable method for live-cell HCA using acridine orange (AO), a metachromatic fluorescent dye that highlights cellular organization by staining nucleic acids and acidic compartments. The assay provides visualization of distinct subcellular structures, including nuclei and cytoplasmic organelles, using a two-channel fluorescence readout. Compatible with high-throughput microscopy and computational analysis, the method supports diverse applications such as phenotypic screening, cytotoxicity assessment, and morphological profiling. By preserving cell viability and enabling dynamic, real-time measurements, this live-cell imaging approach complements existing fixed-cell assays and offers a versatile platform for uncovering complex cellular phenotypes.
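
For the computational-analysis end of the assay, the sketch below shows one way to extract simple per-cell features from a two-channel AO image with scikit-image. The channel assignment, file name, and segmentation parameters are illustrative assumptions rather than the protocol's prescribed pipeline.

```python
# Minimal sketch: per-cell features from a two-channel acridine orange image.
# Channel roles, file name, and segmentation parameters are illustrative assumptions.
from skimage import io, filters, measure, morphology

img = io.imread("ao_field.tif")          # placeholder; assumed shape (H, W, 2)
green, red = img[..., 0], img[..., 1]    # assumed: green ~ nuclei, red ~ acidic organelles

# Segment nuclei from the green channel with a global Otsu threshold.
nuc_mask = green > filters.threshold_otsu(green)
nuc_mask = morphology.remove_small_objects(nuc_mask, min_size=50)
labels = measure.label(nuc_mask)

# Per-nucleus morphology plus mean red-channel signal as a crude organelle readout.
rows = []
for region in measure.regionprops(labels, intensity_image=red):
    rows.append({
        "label": region.label,
        "area_px": region.area,
        "eccentricity": region.eccentricity,
        "mean_red_in_nucleus": region.mean_intensity,
    })

print(f"{len(rows)} cells segmented")
for r in rows[:5]:
    print(r)
```

Per-cell tables of this kind can then be aggregated across wells and time points for phenotypic screening or cytotoxicity readouts.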

0 Q&A 1003 Views Sep 20, 2025

Weighted gene co-expression network analysis (WGCNA) is widely used in transcriptomic studies to identify groups of highly correlated genes, aiding in the understanding of disease mechanisms. Although numerous protocols exist for constructing WGCNA networks from gene expression data, many focus on single datasets and do not address how to compare module stability across conditions. Here, we present a protocol for constructing and comparing WGCNA modules in paired tumor and normal datasets, enabling the identification of modules involved in both core biological processes and those specifically related to cancer pathogenesis. By incorporating module preservation analysis, this approach allows researchers to gain deeper insights into the molecular underpinnings of oral cancer, as well as other diseases. Overall, this protocol provides a framework for module preservation analysis in paired datasets, enabling researchers to identify which gene co-expression modules are conserved or disrupted between conditions, thereby advancing our understanding of disease-specific vs. universal biological processes.
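
Module preservation itself is usually computed with the WGCNA R package (e.g., its modulePreservation routine); purely to illustrate the idea, the sketch below computes a simple preservation-style statistic in Python: for each module defined in the reference (normal) dataset, it compares the mean intramodular correlation in the reference versus the test (tumor) dataset. The synthetic data, module assignments, and statistic are illustrative stand-ins, not WGCNA's Zsummary machinery.

```python
# Conceptual sketch of module preservation: compare mean intramodular correlation
# of reference-defined modules between a reference and a test expression matrix.
# This illustrates the idea only; it is not WGCNA's modulePreservation statistic.
import numpy as np

def mean_intramodular_cor(expr, genes):
    """Mean absolute pairwise correlation among `genes` (expr: samples x genes)."""
    cor = np.corrcoef(expr[:, genes], rowvar=False)
    return np.abs(cor[np.triu_indices_from(cor, k=1)]).mean()

rng = np.random.default_rng(0)
n_samples, n_genes = 40, 300
normal = rng.normal(size=(n_samples, n_genes))   # placeholder "normal" expression
tumor = rng.normal(size=(n_samples, n_genes))    # placeholder "tumor" expression

# Placeholder module assignments (in practice, module colours from WGCNA run on
# the reference dataset).
modules = {"blue": np.arange(0, 100), "brown": np.arange(100, 200),
           "turquoise": np.arange(200, 300)}

for name, genes in modules.items():
    ref_density = mean_intramodular_cor(normal, genes)
    test_density = mean_intramodular_cor(tumor, genes)
    print(f"module {name}: reference {ref_density:.3f}, tumor {test_density:.3f}, "
          f"preservation ratio {test_density / ref_density:.2f}")
```

Modules with a preservation ratio near 1 would correspond to conserved co-expression structure, while markedly lower ratios would flag modules disrupted in the tumor data.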

0 Q&A 2550 Views Sep 5, 2025

Chromatin-associated RNAs (caRNAs) have been increasingly recognized as key regulators of gene expression and genome architecture. A few technologies, such as ChRD-PET and RedChIP, have emerged to assess protein-mediated RNA–chromatin interactions, but each has limitations. Here, we describe the TaDRIM-seq (targeted DNA-associated RNA and RNA–RNA interaction mapping by sequencing) technique, which combines Protein G (PG)-Tn5-targeted DNA tagmentation with in situ proximity ligation to simultaneously profile caRNAs across genomic regions and capture global RNA–RNA interactions within intact nuclei. This approach reduces the required cell input, shortens the experimental duration compared to existing protocols, and is applicable to both mammalian and plant systems.

0 Q&A 1141 Views Aug 5, 2025

Thousands of RNAs are localized to specific subcellular locations, and these localization patterns are often required for optimal cell function. However, the sequences within RNAs that direct their transport are unknown for almost all localized transcripts. Similarly, the RNA content of most subcellular locations remains unknown. To facilitate the study of subcellular transcriptomes, we developed the RNA proximity labeling method OINC-seq. OINC-seq utilizes photoactivatable, spatially restricted RNA oxidation to specifically label RNA in proximity to a subcellularly localized bait protein. After labeling, these oxidative RNA marks are then read out via high-throughput sequencing due to their ability to induce predictable misincorporation events by reverse transcriptase. These induced mutations are then quantitatively assessed for each gene using our software package PIGPEN. The observed mutation rate for a given RNA species is therefore related to its proximity to the localized bait protein. This protocol describes procedures for assaying RNA localization via OINC-seq experiments as well as computational procedures for analyzing the resulting data using PIGPEN.
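
PIGPEN performs the real quantification; as a rough illustration of the underlying idea (tallying reverse-transcriptase misincorporation events over a region), the sketch below counts mismatch rates in a genomic interval from an aligned BAM file with pysam. The file name, interval, and the decision to count all mismatch types equally are simplifying assumptions.

```python
# Rough sketch: mismatch rate over a region from a coordinate-sorted, indexed BAM.
# File name and region are placeholders; real OINC-seq analysis uses PIGPEN.
import pysam

BAM = "oinc_seq.sorted.bam"                 # placeholder BAM (sorted and indexed)
REGION = ("chr1", 1_000_000, 1_010_000)     # placeholder interval (e.g., one gene)

mismatches = 0
aligned_bases = 0

with pysam.AlignmentFile(BAM, "rb") as bam:
    for read in bam.fetch(*REGION):
        if read.is_unmapped or read.is_secondary or read.is_supplementary:
            continue
        # With with_seq=True, the reference base is returned and is lowercase at
        # mismatch positions (requires the MD tag set by the aligner).
        for qpos, rpos, ref_base in read.get_aligned_pairs(with_seq=True):
            if qpos is None or rpos is None or ref_base is None:
                continue  # skip insertions, deletions, and soft-clipped bases
            aligned_bases += 1
            if ref_base.islower():
                mismatches += 1

rate = mismatches / aligned_bases if aligned_bases else 0.0
print(f"{mismatches} mismatches over {aligned_bases} aligned bases "
      f"(rate = {rate:.2e}) in {REGION[0]}:{REGION[1]}-{REGION[2]}")
```

Comparing such rates between bait-proximal and control samples, gene by gene, is the intuition behind the quantitative output that PIGPEN provides.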

0 Q&A 1610 Views Jul 20, 2025

Transcriptional pausing dynamically regulates spatiotemporal gene expression during cellular differentiation, development, and environmental adaptation. Precise measurement of pausing duration, a critical parameter in transcriptional control, has been challenging due to limitations in resolution and confounding factors. We introduce Fast TV-PRO-seq, an optimized protocol built on time-variant precision run-on sequencing (TV-PRO-seq), which enables genome-wide, single-base-resolution mapping of RNA polymerase II pausing times. Unlike standard PRO-seq, Fast TV-PRO-seq employs sarkosyl-free biotin-NTP run-on with time gradients and integrates on-bead enzymatic reactions to streamline workflows. Key improvements include (1) reducing experimental time from 4 to 2 days, (2) reducing cell input requirements, and (3) improving process efficiency and simplifying command-line operations through the use of bash scripts.

0 Q&A 1282 Views Jul 20, 2025

The root meristem navigates a highly variable soil environment in which limited water availability restricts water uptake, slowing or halting growth. Traditional studies impose a uniform, high level of osmotic stress, which poorly represents natural conditions in which roots gradually encounter increasing osmotic stress. Uniform high osmotic stress reduces root growth by inhibiting cell division and shortening mature cell length. This protocol describes a simple and effective in vitro system that uses a gradient mixer to generate a vertical gradient in an agar gel based on the principle of communicating vessels, exploiting gravity to produce a continuous mannitol concentration gradient (from 0 to 400 mM mannitol) reaching osmotic potentials of −1.2 MPa. It enables long-term analysis of Arabidopsis root growth under progressive water deficit, improving phenotyping and molecular studies under soil-like conditions.
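
For orientation when converting mannitol concentrations into water potentials, the ideal-solution van't Hoff approximation below gives roughly −1 MPa for 400 mM mannitol at 25 °C, in the same range as the −1.2 MPa endpoint quoted above; concentrated solutions and gel matrices deviate somewhat from ideal behaviour, so measured values should be preferred where available.

```latex
% Ideal-solution (van't Hoff) estimate of osmotic potential; illustrative only.
\[
  \Psi_s \approx -\,c\,R\,T
        = -\,(0.4~\mathrm{mol\,L^{-1}})\,(0.008314~\mathrm{L\,MPa\,mol^{-1}\,K^{-1}})\,(298~\mathrm{K})
        \approx -0.99~\mathrm{MPa}.
\]
```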

0 Q&A 1702 Views Jul 5, 2025

The complexity of the human transcriptome poses significant challenges for complete annotation. Traditional RNA-seq, often limited by sensitivity and short read lengths, is frequently inadequate for identifying low-abundant transcripts and resolving complex populations of transcript isoforms. Direct long-read sequencing, while offering full-length information, suffers from throughput limitations, hindering the capture of low-abundance transcripts. To address these challenges, we introduce a targeted RNA enrichment strategy, rapid amplification of cDNA ends coupled with Nanopore sequencing (RACE-Nano-Seq). This method unravels the deep complexity of transcripts containing anchor sequences—specific regions of interest that might be exons of annotated genes, in silico predicted exons, or other sequences. RACE-Nano-Seq is based on inverse PCR with primers targeting these anchor regions to enrich the corresponding transcripts in both 5' and 3' directions. This method can be scaled for high-throughput transcriptome profiling by using multiplexing strategies. Through targeted RNA enrichment and full-length sequencing, RACE-Nano-Seq enables accurate and comprehensive profiling of low-abundance transcripts, often revealing complex transcript profiles at the targeted loci, both annotated and unannotated.

0 Q&A 1386 Views Jul 5, 2025

Since the creation of the Global Polio Eradication Initiative (GPEI) in 1988, significant progress has been made toward attaining a poliovirus-free world. This has resulted in the eradication of wild poliovirus (WPV) serotypes two (WPV2) and three (WPV3) and limited transmission of serotype one (WPV1) in Pakistan and Afghanistan. However, the increased emergence of circulating vaccine-derived poliovirus (cVDPV) and the continued circulation of WPV1, although limited to two countries, pose a continuous threat of international spread of poliovirus. These challenges highlight the need to further strengthen surveillance and outbreak responses, particularly in the African Region (AFRO). Phylogeographic visualization tools may provide insights into changes in poliovirus epidemiology, which can in turn guide the implementation of more strategic and effective supplementary immunization activities and improved outbreak response and surveillance. We created a comprehensive protocol for the phylogeographic analysis of polioviruses using Nextstrain, a powerful open-source tool for real-time interactive visualization of virus sequencing data. It is expected that this protocol will support poliovirus elimination strategies in AFRO and contribute significantly to global eradication strategies. These tools have been utilized for other pathogens of public health importance, for example, SARS-CoV-2, human influenza, Ebola, and Mpox, among others, through real-time tracking of pathogen evolution (https://nextstrain.org), harnessing the scientific and public health potential of pathogen genome data.

0 Q&A 2199 Views Jun 20, 2025

Epithelial tissues form barriers to the flow of ions, nutrients, waste products, bacteria, and viruses. The conventional electrophysiology measurement of transepithelial resistance (TEER/TER) can quantify epithelial barrier integrity, but does not capture all the electrical behavior of the tissue or provide insight into membrane-specific properties. Electrochemical impedance spectroscopy, in addition to measurement of TER, enables measurement of transepithelial capacitance (TEC) and a ratio of electrical time constants for the tissue, which we term the membrane ratio. This protocol describes how to perform galvanostatic electrochemical impedance spectroscopy on epithelia using commercially available cell culture inserts and chambers, detailing the apparatus, electrical signal, fitting technique, and error quantification. The measurement can be performed in under 1 min on commercially available cell culture inserts and electrophysiology chambers using instrumentation capable of galvanostatic sinusoidal signal processing (4 μA amplitude, 2 Hz to 50 kHz). All fits to the model have less than 10 Ω mean absolute error, revealing repeatable values distinct for each cell type. On representative retinal pigment (n = 3) and bronchiolar epithelial samples (n = 4), TER measurements were 500–667 Ω·cm² and 955–1,034 Ω·cm² (within the expected range), TEC measurements were 3.65–4.10 μF/cm² and 1.07–1.10 μF/cm², and membrane ratio measurements were 18–22 and 1.9–2.2, respectively.
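
As a sketch of the fitting step, the code below fits an impedance spectrum to a common two-membrane equivalent circuit (a solution resistance in series with two parallel RC elements) with SciPy and derives TER, TEC, and a time-constant (membrane) ratio. The circuit choice, frequency grid, and synthetic data are assumptions for illustration, not the authors' exact model or instrumentation.

```python
# Sketch: fit Rs + two parallel RC elements in series to an impedance spectrum,
# then report TER, TEC, and a membrane (time-constant) ratio.
# Circuit choice and synthetic data are illustrative assumptions.
import numpy as np
from scipy.optimize import least_squares

def circuit_z(freq, rs, r1, c1, r2, c2):
    """Complex impedance of Rs in series with two parallel RC elements."""
    w = 2 * np.pi * freq
    return rs + r1 / (1 + 1j * w * r1 * c1) + r2 / (1 + 1j * w * r2 * c2)

def residuals(params, freq, z_meas):
    diff = circuit_z(freq, *params) - z_meas
    return np.concatenate([diff.real, diff.imag])

# Synthetic "measurement" over 2 Hz - 50 kHz (placeholder for chamber data).
freq = np.logspace(np.log10(2), np.log10(5e4), 60)
true = (50.0, 400.0, 3e-6, 150.0, 1e-6)   # Rs, R1, C1, R2, C2 (ohm, farad)
z_meas = circuit_z(freq, *true) + np.random.default_rng(1).normal(0, 1, freq.size)

fit = least_squares(residuals, x0=(10, 100, 1e-6, 100, 1e-6),
                    bounds=(0, np.inf), args=(freq, z_meas))
rs, r1, c1, r2, c2 = fit.x

ter = r1 + r2                      # transepithelial resistance (ohm); multiply by
                                   # insert area in cm^2 to report ohm*cm^2
tec = (c1 * c2) / (c1 + c2)        # series capacitance of the two membranes (farad)
membrane_ratio = max(r1 * c1, r2 * c2) / min(r1 * c1, r2 * c2)
mae = np.mean(np.abs(circuit_z(freq, *fit.x) - z_meas))
print(f"TER = {ter:.0f} ohm, TEC = {tec * 1e6:.2f} uF, "
      f"membrane ratio = {membrane_ratio:.1f}, mean abs error = {mae:.2f} ohm")
```

The mean absolute error printed at the end mirrors the error quantification described above; dividing the fitted capacitance by the insert area likewise yields μF/cm².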

0 Q&A 1283 Views May 20, 2025

Normative mapping is a framework used to map population-level features of health-related variables. It is widely used in neuroscience research, but the literature lacks established protocols in modalities that do not support healthy control measurements, such as intracranial electroencephalograms (icEEG). An icEEG normative map would allow researchers to learn about population-level brain activity and enable the comparison of individual data against these norms to identify abnormalities. Currently, no standardised guide exists for transforming clinical data into a normative, regional icEEG map. Papers often cite different software and numerous articles to summarise the lengthy method, making it laborious for other researchers to understand or apply the process. Our protocol seeks to fill this gap by providing a dataflow guide and key decision points that summarise existing methods. This protocol was heavily used in published works from our own lab (twelve peer-reviewed journal publications). Briefly, we take as input the icEEG recordings and neuroimaging data from people with epilepsy who are undergoing evaluation for resective surgery. As final outputs, we obtain a normative icEEG map, comprising signal properties localised to brain regions. Optionally, we can also process new subjects through the same pipeline and obtain their z-scores (or centiles) in each brain region for abnormality detection and localisation. To date, a single, cohesive dataflow pipeline for generating normative icEEG maps, along with abnormality mapping, has not been created. We envisage that this dataflow guide will not only increase understanding and application of normative mapping methods but will also improve the consistency and quality of studies in the field.
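
The abnormality-detection step reduces to a per-region comparison of a new subject against the normative distribution. The sketch below shows that computation, with region names, the signal property, and all normative values standing in as placeholders.

```python
# Sketch: regional z-scores for a new subject against a normative icEEG map.
# Region names, the signal property, and all numbers are placeholders.
import numpy as np

regions = ["hippocampus_L", "hippocampus_R", "amygdala_L", "amygdala_R"]

# Normative map: mean and standard deviation of some regional signal property
# (e.g., relative band power) aggregated across the normative cohort.
norm_mean = np.array([0.32, 0.31, 0.28, 0.27])
norm_sd = np.array([0.05, 0.05, 0.04, 0.04])

# The new subject's value of the same property in each region.
subject = np.array([0.30, 0.47, 0.29, 0.26])

z = (subject - norm_mean) / norm_sd
for region, score in zip(regions, z):
    flag = "  <-- candidate abnormality" if abs(score) > 2 else ""
    print(f"{region:15s} z = {score:+.2f}{flag}")
```

Centiles can be obtained the same way by passing the z-scores through a cumulative distribution function, and the |z| > 2 flag here is just an example threshold.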



