RESEARCH PROJECTS

Our research projects in the fields of computer-aided drug design and bioinformatics are as follows:
- Deriving stable microarray signatures that differentiate cancers, using machine learning and feature-elimination methods, and evaluating the consistency of consensus scoring across multiple rounds of random sampling and gene ranking (see the sketch after this list). The identified signatures reflect disease mechanisms and can serve as indicators for disease diagnosis. Our current interest lies in identifying biomarkers for breast cancer and major depression.
- Identifying next-generation therapeutic targets for specific diseases such as obesity, major depression, and cancer. Several methods are applied collectively, including: (A) sequence-similarity analysis between drug-binding domains; (B) computation of the number of similar human proteins, affiliated human pathways, and human tissues associated with a target; (C) structural comparison between drug-binding domains; and (D) target classification based on physicochemical characteristics detected by machine learning.
- Leading and conducting the development of bioinformatics databases that collect biological, pharmaceutical, and chemical information. We are also interested in constructing innovative software for drug discovery and bioinformatics, including the design and implementation of an integrated bioinformatics software system for exploring novel therapeutic target agents.
- Conducting biostatistical studies on the distribution of therapeutically active molecules, especially approved and clinical-trial drugs, across biological species, and identifying key species for ecological protection.
- Performing comprehensive biostatistical studies of therapeutic targets in clinical trials, including comparative analysis against targets of approved drugs.
- Studying correlated groups of genes by applying graph theory to filter complex gene-correlation networks. The genetic variation identified indicates complex inter- and intra-individual differences.
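The consensus signature-selection idea in the first project can be illustrated with a short sketch. This is a minimal illustration assuming scikit-learn (recursive feature elimination, a synthetic stand-in for microarray data, and simple vote counting as the consensus score), not the exact pipeline used in our studies.

```python
# Minimal sketch: consensus gene selection by repeated random sampling plus
# recursive feature elimination (RFE). Data, classifier and scoring are
# illustrative stand-ins, not the pipeline used in our publications.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

# Stand-in for a microarray matrix: samples x genes with a few informative genes.
X, y = make_classification(n_samples=120, n_features=200, n_informative=15,
                           random_state=0)
rng = np.random.default_rng(0)
selection_counts = np.zeros(X.shape[1])

for _ in range(50):                                   # repeated random sampling
    idx = rng.choice(len(y), size=int(0.8 * len(y)), replace=False)
    rfe = RFE(LogisticRegression(max_iter=2000), n_features_to_select=20)
    rfe.fit(X[idx], y[idx])
    selection_counts += rfe.support_                  # one consensus vote per gene

consensus_signature = np.argsort(selection_counts)[::-1][:20]
print("most consistently selected features:", consensus_signature)
```

Genes selected in most resampling rounds form the stable signature; a low vote count flags features whose selection depends on the particular samples drawn.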
DATABASE CONSTRUCTION

Our experience in database construction has led to several pharmacoinformatics databases, as follows:
TTD: Therapeutic Target Database

Extensive efforts have been directed at the discovery, investigation and clinical monitoring of targeted therapeutics. These efforts can be facilitated by convenient access to the genetic, proteomic, interactive and other aspects of therapeutic targets. We therefore developed the Therapeutic Target Database (TTD) to provide information about known and explored therapeutic protein and nucleic acid targets, together with the targeted diseases, pathway information and the corresponding drugs directed at each of these targets. TTD is one of the most widely used pharmaceutical databases worldwide, and it includes links to relevant databases covering target function, sequence, 3D structure, ligand-binding properties, enzyme nomenclature, drug structure, therapeutic class, and clinical development status. Our Publication(s) Describing This Database:
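The way a TTD entry ties a target to its diseases, pathways and drugs can be pictured with a minimal data model. The field names and example values below are hypothetical illustrations, not the actual TTD schema.

```python
# Minimal sketch of a cross-linked target entry. Field names are hypothetical,
# not the actual TTD schema; the example record is for illustration only.
from dataclasses import dataclass, field

@dataclass
class TargetEntry:
    target_name: str                                    # protein or nucleic acid target
    target_type: str                                    # e.g. "successful" or "clinical trial"
    diseases: list[str] = field(default_factory=list)   # targeted diseases
    pathways: list[str] = field(default_factory=list)   # affiliated pathways
    drugs: dict[str, str] = field(default_factory=dict) # drug name -> development status

entry = TargetEntry(
    target_name="Epidermal growth factor receptor",
    target_type="successful",
    diseases=["Non-small-cell lung cancer"],
    pathways=["EGFR signaling"],
    drugs={"Gefitinib": "approved"},
)
print(entry.target_name, "->", list(entry.drugs))
```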
VARIDT: VARIability of Drug Transporter Database

The absorption, distribution and excretion of drugs are largely determined by drug transporters (DTs), whose variability has therefore attracted considerable attention. This variability has three aspects: epigenetic regulation and genetic polymorphism; species-, tissue- and disease-specific DT abundances; and exogenous factors modulating DT activity. The variability data of each aspect are essential for clinical studies, and a collective consideration of multiple aspects becomes essential in precision medicine. However, no database had been constructed to provide comprehensive data on all aspects of DT variability. Herein, the Variability of Drug Transporter Database (VARIDT) was introduced to provide such data. First, 177 and 146 DTs were confirmed, for the first time, by the drugs they transport that are approved or in clinical/preclinical development, respectively. Second, for the confirmed DTs, VARIDT comprehensively collected all aspects of their variability (23,947 DNA methylations; 7,317 noncoding RNA/histone regulations; 1,278 genetic polymorphisms; differential abundance profiles of 257 DTs in 21,781 patients/healthy individuals; expression of 245 DTs in 67 tissues of human/model organisms; 1,225 exogenous factors altering the activity of 148 DTs), which allows mutual connection between any of these aspects. Owing to the huge amount of accumulated data, VARIDT makes it possible to generalize characteristics to reveal disease etiology and optimize clinical treatment. It is freely accessible at https://db.idrblab.org/varidt/. Our Publication(s) Describing This Database:
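The "mutual connection between aspects" can be sketched as grouping variability records of the three aspects by transporter so they can be examined together. The records and fields below are illustrative placeholders, not the VARIDT schema.

```python
# Minimal sketch: connect the three aspects of variability for one transporter.
# Records and field names are illustrative placeholders, not VARIDT entries.
from collections import defaultdict

records = [
    {"transporter": "ABCB1", "aspect": "epigenetic/genetic", "detail": "promoter DNA methylation"},
    {"transporter": "ABCB1", "aspect": "abundance", "detail": "differential abundance in patients"},
    {"transporter": "ABCB1", "aspect": "exogenous", "detail": "activity altered by a dietary compound"},
]

by_transporter = defaultdict(lambda: defaultdict(list))
for rec in records:
    by_transporter[rec["transporter"]][rec["aspect"]].append(rec["detail"])

# All variability evidence for one transporter, viewed across aspects at once.
for aspect, details in by_transporter["ABCB1"].items():
    print(aspect, "->", details)
```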
INTEDE: Interactome of Drug-metabolizing Enzymes

Drug-metabolizing enzymes (DMEs) are critical determinants of drug safety and efficacy, and the interactome of DMEs has attracted extensive attention. There are three major interaction types in this interactome: microbiome-DME interactions (MICBIO), xenobiotic-DME interactions (XEOTIC), and host protein-DME interactions (HOSPPI). The interaction data of each type are essential for understanding drug metabolism, and the collective consideration of multiple types has implications for the future practice of precision medicine. However, no database had been designed to systematically provide data on all types of DME interactions. Here, a database of the Interactome of Drug-Metabolizing Enzymes (INTEDE) was therefore constructed to offer these interaction data. First, 1,047 unique DMEs (448 host and 599 microbial) were confirmed, for the first time, using the drugs they metabolize. Second, for these newly confirmed DMEs, all types of their interactions (3,359 MICBIOs between 225 microbial species and 185 DMEs; 47,778 XEOTICs between 4,150 xenobiotics and 501 DMEs; 7,849 HOSPPIs between 565 human proteins and 566 DMEs) were comprehensively collected and provided, which enables crosstalk analysis among multiple interaction types. Because of the huge amount of accumulated data, INTEDE makes it possible to generalize key features for revealing disease etiology and optimizing clinical treatment. INTEDE is freely accessible at https://idrblab.org/intede/. Our Publication(s) Describing This Database:
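The crosstalk analysis across interaction types can be illustrated by finding enzymes that appear in more than one type. The interaction tuples below are hypothetical placeholders, not actual INTEDE entries.

```python
# Minimal sketch: find DMEs that participate in more than one interaction type,
# the kind of crosstalk a combined interactome enables. The example pairs are
# placeholders, not actual INTEDE entries.
MICBIO = {("Bacteroides sp.", "CYP3A4")}           # microbial species -> DME
XEOTIC = {("grapefruit furanocoumarin", "CYP3A4"),
          ("caffeine", "CYP1A2")}                  # xenobiotic -> DME
HOSPPI = {("PGRMC1", "CYP3A4")}                    # host protein -> DME

dmes_by_type = {
    "MICBIO": {dme for _, dme in MICBIO},
    "XEOTIC": {dme for _, dme in XEOTIC},
    "HOSPPI": {dme for _, dme in HOSPPI},
}

for dme in set.union(*dmes_by_type.values()):
    types = [t for t, dmes in dmes_by_type.items() if dme in dmes]
    if len(types) > 1:
        print(dme, "has crosstalk across:", types)
```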
GIMICA: Host Genetic and Immune Factors Shaping Human Microbiota

Besides the environmental factors that have tremendous impacts on the composition of microbial communities, host factors have recently gained extensive attention for their roles in shaping the human microbiota. There are two major types of host factors: host genetic factors (HGFs) and host immune factors (HIFs). Factors of each type are essential for defining the chemical and physical landscapes inhabited by the microbiota, and the collective consideration of both types has great implications for comprehensive health management. However, no database was available to provide comprehensive factors of both types. Herein, a database entitled 'Host Genetic and Immune Factors Shaping Human Microbiota (GIMICA)' was constructed. Based on the 4,257 microbes confirmed to inhabit nine sites of the human body, 2,851 HGFs (1,368 single nucleotide polymorphisms (SNPs), 186 copy number variations (CNVs), and 1,297 non-coding ribonucleic acids (RNAs)) modulating the expression of 370 microbes were collected, and 549 HIFs (126 lymphocytes and phagocytes, 387 immune proteins, and 36 immune pathways) regulating the abundance of 455 microbes were also provided. All in all, GIMICA enables collective consideration not only between the two types of host factor but also between host and environmental factors. It is freely accessible, without login requirement, at https://idrblab.org/gimica/. Our Publication(s) Describing This Database:
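The collective view the database supports can be sketched by gathering genetic and immune factors that act on the same microbe. The factor and microbe pairings below are illustrative only.

```python
# Minimal sketch: gather host genetic factors (HGFs) and host immune factors
# (HIFs) reported to act on the same microbe. Entries are illustrative only.
hgf_links = [("SNP rs4988235 (LCT locus)", "Bifidobacterium")]   # genetic factor -> microbe
hif_links = [("secretory IgA", "Bifidobacterium"),
             ("IL-22", "Lactobacillus")]                         # immune factor -> microbe

microbes = {m for _, m in hgf_links} | {m for _, m in hif_links}
for microbe in sorted(microbes):
    hgfs = [f for f, m in hgf_links if m == microbe]
    hifs = [f for f, m in hif_links if m == microbe]
    print(f"{microbe}: HGFs={hgfs}, HIFs={hifs}")
```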
SOFTWARE DEVELOPMENT

Our experience in software development has led to several pharmacoinformatics servers, as follows:
NOREVA: NORmalization and EVAluation of MS-based metabolomics data

Diverse forms of unwanted signal variation in mass spectrometry-based metabolomics data adversely affect the accuracy of metabolic profiling. A variety of normalization methods have been developed to address this problem. However, their performance varies greatly and depends heavily on the nature of the studied data. Moreover, given the complexity of real data, it is not feasible to assess method performance by a single criterion. We therefore developed NOREVA to enable the performance evaluation of various normalization methods from multiple perspectives. NOREVA integrates five well-established criteria (each with a distinct underlying theory) to ensure a more comprehensive evaluation than any single criterion. It provides the most complete set of available normalization methods, with the unique features of removing overall unwanted variation based on quality-control metabolites and allowing quality-control-sample-based correction sequentially followed by data normalization. The originality of NOREVA and the reliability of its algorithms were extensively validated by case studies on five benchmark datasets. In sum, NOREVA is distinguished by its capability to identify well-performing normalization methods by taking multiple criteria into consideration, and it can be an indispensable complement to other available tools. NOREVA can be freely accessed at http://server.idrb.cqu.edu.cn/noreva/. Our Publication(s) Describing This Server:
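The idea of scoring several normalization methods against more than one criterion can be sketched as follows. The two normalization methods and two criteria shown are simplified stand-ins for the larger sets NOREVA actually evaluates.

```python
# Minimal sketch: apply candidate normalization methods to a peak-intensity
# matrix and score each against more than one criterion. Methods and criteria
# are simplified stand-ins for those evaluated by NOREVA.
import numpy as np

rng = np.random.default_rng(1)
intensities = rng.lognormal(mean=5, sigma=1, size=(30, 200))   # samples x metabolites

def total_sum_norm(x):          # scale each sample to equal total signal
    return x / x.sum(axis=1, keepdims=True) * x.sum(axis=1).mean()

def median_norm(x):             # scale each sample by its median intensity
    return x / np.median(x, axis=1, keepdims=True) * np.median(x)

def pooled_cv(x):               # criterion 1: median coefficient of variation per metabolite
    return float(np.median(x.std(axis=0) / x.mean(axis=0)))

def sample_spread(x):           # criterion 2: spread of per-sample medians
    return float(np.std(np.median(x, axis=1)))

for name, method in {"total-sum": total_sum_norm, "median": median_norm}.items():
    normed = method(intensities)
    print(f"{name}: CV={pooled_cv(normed):.3f}, spread={sample_spread(normed):.3f}")
```

A method that looks best under one criterion may rank poorly under another, which is why the server aggregates several complementary criteria before recommending a method.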
ANPELA: ANalysis and PErformance-assessment of the LAbel-free proteome quantification

Label-free quantification (LFQ), with a specific and sequentially integrated workflow of acquisition technique, quantification tool and processing method, has emerged as a popular technique in metaproteomic research for providing a comprehensive landscape of the adaptive responses of microbes to external stimuli and of their interactions with other organisms or host cells. The performance of a specific LFQ workflow is highly dependent on the studied data. Hence, it is essential to discover the most appropriate workflow for a specific dataset. However, such discovery is challenging owing to the large number of possible workflows and the multifaceted nature of the evaluation criteria. Herein, the web server ANPELA (https://idrblab.org/anpela/) was developed and validated as the first tool enabling performance assessment of the whole LFQ workflow (collective assessment by five well-established criteria with distinct underlying theories), and it enables the identification of the optimal LFQ workflow(s) through a comprehensive performance ranking. ANPELA not only automatically detects the diverse formats of data generated by all quantification tools but also provides the most complete set of processing methods among the available web servers and stand-alone tools. Systematic validation using metaproteomic benchmarks revealed ANPELA's capabilities in (1) discovering well-performing workflow(s), (2) enabling assessment from multiple perspectives and (3) validating LFQ accuracy using spiked proteins. ANPELA has the unique ability to evaluate the performance of the whole LFQ workflow and enables the discovery of the optimal LFQ workflows through a comprehensive performance ranking of all 560 workflows. Therefore, it has great potential for application in metaproteomics and other studies requiring LFQ techniques, as many features are shared among proteomic studies. Our Publication(s) Describing This Server:
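The combinatorial nature of ranking whole workflows can be sketched as follows. The step options and the scoring function are hypothetical placeholders for the acquisition/quantification/processing choices and the five criteria ANPELA actually uses.

```python
# Minimal sketch: enumerate workflow combinations (one option per step) and
# rank them by an aggregate score. Step options and the scoring function are
# hypothetical placeholders, not ANPELA's actual components or criteria.
from itertools import product
import random

steps = {
    "transformation": ["log2", "none"],
    "normalization": ["median", "quantile", "TIC"],
    "imputation": ["knn", "min-value"],
}

random.seed(0)

def evaluate(workflow):
    # Placeholder for a multi-criteria assessment; returns one aggregate score.
    return random.random()

workflows = [dict(zip(steps, combo)) for combo in product(*steps.values())]
ranked = sorted(workflows, key=evaluate, reverse=True)
print("number of candidate workflows:", len(workflows))
print("top-ranked workflow:", ranked[0])
```

With only three steps and a handful of options the space is already a dozen workflows; the full combination of acquisition, quantification and processing choices is what produces the 560 workflows ranked by the server.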
SSIZER: Determining the Sample Sufficiency for Comparative Biological Study

Comparative biomedical studies typically require many samples to achieve statistically significant analyses. A frequently encountered question is how many samples are sufficient for a particular study. This question has traditionally been assessed using statistical power, but that assessment alone may not guarantee the full and reproducible discovery of markers that truly discriminate between biological groups (BMC Bioinformatics. 11: 447, 2010; Nat Rev Neurosci. 14: 365-76, 2013). Two novel types of statistical indexes have thus been introduced to assess sample size from different perspectives, by considering diagnostic accuracy (Metabolomics. 9: 280-99, 2013) and robustness (Cancer Res. 74: 4612-21, 2014). Owing to the complementary nature of these index types, a comprehensive evaluation based on all of them is necessary for a more accurate assessment. However, no such tool was available. Herein, the online tool SSizer was developed and validated to enable assessment of whether a user-input biomedical dataset is sufficient for a given study, with three index types provided, for the first time, to achieve a comprehensive assessment. These indexes comprise: (I) statistical power analyzing the level of difference between two comparative groups (Radiology. 227: 309-13, 2003); (II) overall diagnostic and classification accuracy on independent data (Metabolomics. 9: 280-99, 2013); and (III) robustness among the lists of biomarkers identified from different datasets (Cancer Res. 74: 4612-21, 2014). Moreover, a sample simulation based on the user-input data is performed to expand the data and then determine the sample size required for the given study (Anal Chem. 88: 5179-88, 2016). In sum, SSizer is unique in its capacity to comprehensively evaluate whether the sample size is sufficient and to determine the required number of samples for a user-input dataset, which can facilitate current biomedical studies including metabolomics, proteomics, and so on. SSizer is accessible free of charge at https://idrblab.org/ssizer/. Our Publication(s) Describing This Server:
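A minimal sketch of the three kinds of indexes on a toy two-group dataset follows. The statistical choices (a t-test-based power proxy, a cross-validated classifier, and a simple overlap ratio between biomarker lists from split halves) are simplified assumptions, not SSizer's exact formulas.

```python
# Minimal sketch of three sample-sufficiency indexes on a toy two-group dataset.
# The statistics here (t-test-based power proxy, cross-validated accuracy,
# overlap of top-feature lists from split halves) are simplified assumptions.
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_per_group, n_features = 30, 50
a = rng.normal(0.0, 1.0, size=(n_per_group, n_features))
b = rng.normal(0.5, 1.0, size=(n_per_group, n_features))   # shifted group
X, y = np.vstack([a, b]), np.array([0] * n_per_group + [1] * n_per_group)

# (I) power proxy: fraction of features whose group difference reaches p < 0.05
pvals = stats.ttest_ind(a, b, axis=0).pvalue
print("fraction of features with p < 0.05:", float((pvals < 0.05).mean()))

# (II) classification accuracy estimated on held-out folds
acc = cross_val_score(LogisticRegression(max_iter=2000), X, y, cv=5).mean()
print("cross-validated accuracy:", round(float(acc), 3))

# (III) robustness: overlap of top-10 feature lists from two random halves
order = rng.permutation(len(y))
h1, h2 = order[: len(y) // 2], order[len(y) // 2 :]
def top_features(idx):
    p = stats.ttest_ind(X[idx][y[idx] == 0], X[idx][y[idx] == 1], axis=0).pvalue
    return set(np.argsort(p)[:10])
overlap = len(top_features(h1) & top_features(h2)) / 10
print("top-feature overlap between halves:", overlap)
```

If any one of the three indexes stays low as the simulated sample size grows, the dataset is unlikely to support reproducible biomarker discovery even when the classical power calculation looks adequate.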
CNN-T4SE: CNN-based annotation of bacterial Type IV Secretion system Effectors

The type IV bacterial secretion system (SS) is reported to be one of the most ubiquitous SSs in nature and can induce serious conditions by secreting type IV SS effectors (T4SEs) into host cells. Recent studies have mainly focused on annotating new T4SEs from the huge amount of sequencing data, and various computational tools have therefore been developed to accelerate T4SE annotation. However, these tools are reported to depend heavily on the selected methods, and their annotation performance needs to be further enhanced. Herein, a convolutional neural network (CNN) technique was used to annotate T4SEs by integrating multiple protein-encoding strategies. First, the annotation accuracies of nine encoding strategies integrated with CNN were assessed and compared with those of popular T4SE annotation tools on an independent benchmark. Second, the false discovery rates (FDRs) of various models were systematically evaluated by (1) scanning the genome of Legionella pneumophila subsp. pneumophila ATCC 33152 and (2) predicting real-world non-T4SEs validated by published experiments. Based on the above analyses, the encoding strategies (a) position-specific scoring matrix (PSSM), (b) protein secondary structure and solvent accessibility (PSSSA) and (c) one-hot encoding scheme (Onehot) were identified as well performing when integrated with CNN. Finally, a novel strategy collectively considering the three well-performing models (CNN-PSSM, CNN-PSSSA and CNN-Onehot) was proposed, and a new tool (CNN-T4SE, https://idrblab.org/cnnt4se/) was constructed to facilitate T4SE annotation. All in all, this study provides a comprehensive analysis of the performance of a collection of encoding strategies when integrated with CNN, which could facilitate the suppression of T4SS in infection and limit the spread of antimicrobial resistance. Our Publication(s) Describing This Server:
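The one-hot-encoding branch of such a model can be sketched in a few lines. This is a generic 1-D CNN binary classifier over padded one-hot protein sequences, assuming PyTorch; the architecture and sizes are illustrative, not those of CNN-T4SE itself.

```python
# Minimal sketch: a 1-D CNN classifying one-hot encoded protein sequences as
# effector / non-effector. Assumes PyTorch; layer sizes are illustrative,
# not the CNN-T4SE architecture.
import torch
import torch.nn as nn

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
MAX_LEN = 500

def one_hot(seq: str) -> torch.Tensor:
    x = torch.zeros(len(AMINO_ACIDS), MAX_LEN)         # channels x positions
    for i, aa in enumerate(seq[:MAX_LEN]):
        if aa in AMINO_ACIDS:
            x[AMINO_ACIDS.index(aa), i] = 1.0
    return x

model = nn.Sequential(
    nn.Conv1d(in_channels=20, out_channels=32, kernel_size=9, padding=4),
    nn.ReLU(),
    nn.AdaptiveMaxPool1d(1),                            # pool over sequence length
    nn.Flatten(),
    nn.Linear(32, 2),                                   # effector vs. non-effector
)

batch = torch.stack([one_hot("MKKLLPTA" * 20), one_hot("MSTAVGHR" * 25)])
logits = model(batch)
print(logits.shape)                                     # torch.Size([2, 2])
```

In the combined strategy, analogous models trained on PSSM and PSSSA encodings are considered together with the one-hot model rather than relying on any single encoding.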
PROFEAT: calculation of the PROtein physicochemical FEATures

Studies of biological, disease, and pharmacological networks are facilitated by systems-level investigation using computational tools. In particular, network descriptors developed in other disciplines have found increasing application in the study of protein, gene-regulatory, metabolic, disease, and drug-targeted networks. Public web servers provide facilities for computing network descriptors, but many descriptors are not covered, including ones used in or useful for biological studies. We upgraded the PROFEAT web server (http://bidd2.nus.edu.sg/cgi-bin/profeat2016/main.cgi) to compute up to 329 network descriptors and protein-protein interaction descriptors. The PROFEAT network descriptors comprehensively describe the topological and connectivity characteristics of unweighted (uniform binding constants and molecular levels), edge-weighted (varying binding constants), node-weighted (varying molecular levels), edge-and-node-weighted (varying binding constants and molecular levels), and directed (oriented processes) networks. The usefulness of the network descriptors is illustrated by literature-reported studies of biological networks derived from genome, interactome, transcriptome, metabolome, and diseasome profiles. Our Publication(s) Describing This Server:
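A few representative topological descriptors of the kind such a server computes can be reproduced with a graph library. The small weighted graph and the chosen descriptors below are illustrative assumptions, not PROFEAT's descriptor set.

```python
# Minimal sketch: a few topological descriptors of a small edge-weighted
# network. The graph and descriptors are illustrative; PROFEAT computes a much
# larger descriptor set. Assumes the networkx library.
import networkx as nx

g = nx.Graph()
g.add_weighted_edges_from([
    ("EGFR", "GRB2", 0.9), ("GRB2", "SOS1", 0.8),
    ("SOS1", "KRAS", 0.7), ("KRAS", "RAF1", 0.6), ("EGFR", "KRAS", 0.3),
])

descriptors = {
    "n_nodes": g.number_of_nodes(),
    "n_edges": g.number_of_edges(),
    "density": nx.density(g),
    "avg_clustering": nx.average_clustering(g, weight="weight"),
    "avg_shortest_path": nx.average_shortest_path_length(g),
    "degree_centrality": nx.degree_centrality(g),
}
for name, value in descriptors.items():
    print(name, "->", value)
```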
SVM-Prot: SVM-based Protein functional family prediction

Knowledge of protein function is important for biological, medical and therapeutic studies, but the function of many proteins remains unknown, so improved functional prediction methods are needed. Our SVM-Prot web server employs a machine learning method to predict protein functional families from protein sequences irrespective of sequence similarity, which complements similarity-based and other methods in predicting diverse classes of proteins, including distantly related proteins and homologous proteins of different functions. Since its publication in 2003, we have made major improvements to SVM-Prot: (1) expanded coverage from 54 to 192 functional families; (2) more diverse protein descriptors for protein representation; (3) improved predictive performance owing to more enriched training datasets and a greater variety of protein descriptors; (4) a newly integrated BLAST analysis option for assessing which proteins in the SVM-Prot predicted functional families are similar in sequence to a query protein; and (5) a newly added batch-submission option supporting the classification of multiple proteins. Moreover, two more machine learning approaches, k-nearest neighbors and probabilistic neural networks, were added to facilitate the collective assessment of protein functions by multiple methods. SVM-Prot can be accessed at http://bidd2.nus.edu.sg/cgi-bin/svmprot/svmprot.cgi. Our Publication(s) Describing This Server:
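The sequence-to-descriptor-to-classifier idea can be sketched as follows. Amino acid composition is used here as the simplest descriptor and the toy training sequences are fabricated for illustration, so this is not SVM-Prot's actual descriptor set or model.

```python
# Minimal sketch: represent sequences by amino acid composition and train an
# SVM to separate two functional classes. The descriptor (composition only)
# and the toy sequences are illustrative, not SVM-Prot's actual setup.
import numpy as np
from sklearn.svm import SVC

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq: str) -> np.ndarray:
    counts = np.array([seq.count(aa) for aa in AMINO_ACIDS], dtype=float)
    return counts / max(len(seq), 1)

# Toy training data: class 0 = lysine/arginine-rich, class 1 = hydrophobic-rich.
train_seqs = ["KKRKRKAKRK", "KRKRKKARKR", "LLVVAILLVA", "VVLLAIVLLA"]
labels = [0, 0, 1, 1]
X = np.array([composition(s) for s in train_seqs])

clf = SVC(kernel="rbf").fit(X, labels)
query = composition("KRKAKRLKRK")
print("predicted class:", clf.predict([query])[0])
```

Because the descriptor captures global sequence properties rather than alignments, the same classifier can assign a family to proteins that share little detectable sequence similarity with the training set.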
MMEASE: Meta-Metabolomics by Enhanced Annotation, marker Selection and Enrichment

Large-scale and long-term metabolomic studies have attracted widespread attention in biomedical research yet remain challenging despite recent technical progress. In particular, the ineffective integration of experiments and the limited capacity for metabolite annotation are known issues. Herein, we constructed the online tool MMEASE, which enables the integration of multiple analytical experiments with enhanced metabolite annotation and enrichment analysis (https://idrblab.org/mmease/). MMEASE is unique in being capable of (1) integrating multiple analytical blocks; (2) providing enriched annotation for more than 330 thousand metabolites; and (3) conducting enrichment analysis using various categories/sub-categories. All in all, MMEASE aims to supply a comprehensive service for long-term and large-scale metabolomics, which may provide valuable guidance to current biomedical studies. Our Publication(s) Describing This Server:
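The enrichment-analysis step can be illustrated with a standard over-representation test. The counts below are invented, and the hypergeometric test is a common choice for this kind of analysis rather than necessarily the exact statistic MMEASE applies.

```python
# Minimal sketch: over-representation (enrichment) of a metabolite category in
# a list of markers, using a hypergeometric test. Counts are invented and the
# test is a common default, not necessarily MMEASE's exact statistic.
from scipy.stats import hypergeom

background_size = 5000       # annotated metabolites in the reference set
category_size = 120          # metabolites in one category (e.g. a pathway)
markers = 80                 # differential metabolites found in the study
markers_in_category = 9      # differential metabolites falling in that category

# P(X >= markers_in_category) when drawing `markers` at random from the background
p_value = hypergeom.sf(markers_in_category - 1, background_size, category_size, markers)
print(f"enrichment p-value: {p_value:.4g}")
```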
MetaFS: Performance assessment for biomarker discovery in metaproteomics

Metaproteomic data suffer from two unavoidable issues: high dimensionality and sparsity. Data-reduction methods can maximally identify the relevant subset of significantly differential features and reduce data redundancy, and feature selection (FS) approaches are often applied to obtain this subset. So far, a variety of FS methods have been developed for metaproteomic studies. However, because the performance of FS depends heavily on the data characteristics of a given study, a well-suited FS method must be carefully chosen to obtain reliable and reproducible analysis results. Moreover, it is critical to evaluate the performance of each FS method against comprehensive criteria, because a single criterion is not sufficient to reflect its overall quality. Therefore, we constructed the online tool MetaFS, which provides 13 types of FS methods and conducts a comprehensive evaluation of them using four widely accepted and independent criteria. Furthermore, the functionality and reliability of MetaFS were systematically tested and validated in two case studies. In summary, MetaFS can be a distinguished tool for discovering the overall well-performing FS method for selecting potential biomarkers in microbiome studies. The online tool is freely available at https://idrblab.org/metafs/. Our Publication(s) Describing This Server:
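Comparing feature-selection methods under more than one criterion can be sketched as follows. The two FS methods (univariate F-test and L1-penalized logistic regression) and the two criteria (cross-validated accuracy of the selected subset and selection stability across resamples) are generic stand-ins for the 13 methods and four criteria in MetaFS.

```python
# Minimal sketch: compare two feature-selection (FS) methods by (a) accuracy of
# a classifier trained on the selected features and (b) stability of the
# selection across resamples. Methods and criteria are generic stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel, SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=100, n_features=300, n_informative=10,
                           random_state=0)
rng = np.random.default_rng(0)

selectors = {
    "F-test top-20": lambda: SelectKBest(f_classif, k=20),
    "L1-logistic": lambda: SelectFromModel(
        LogisticRegression(penalty="l1", solver="liblinear", C=0.5), max_features=20),
}

for name, make_selector in selectors.items():
    # Criterion 1: cross-validated accuracy using only the selected features.
    mask = make_selector().fit(X, y).get_support()
    acc = cross_val_score(LogisticRegression(max_iter=2000), X[:, mask], y, cv=5).mean()
    # Criterion 2: stability, as mean pairwise overlap of selections on resamples.
    picks = []
    for _ in range(5):
        idx = rng.choice(len(y), size=80, replace=False)
        picks.append(set(np.flatnonzero(make_selector().fit(X[idx], y[idx]).get_support())))
    overlaps = [len(a & b) / 20 for i, a in enumerate(picks) for b in picks[i + 1:]]
    print(f"{name}: accuracy={acc:.3f}, stability={np.mean(overlaps):.3f}")
```

A method that scores well on accuracy but poorly on stability would yield biomarker lists that change from cohort to cohort, which is exactly the trade-off a multi-criteria assessment is meant to expose.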