Collaborations

Dr. Cathie Aime
The Aime lab focuses on the earliest diverging lineages of Basidiomycota (Pucciniomycotina, Ustilaginomycotina, and Wallemiomycetes) and on the biodiversity of basidiomycetes in tropical ecosystems.
Dr. Aime needed a new way to make Purdue's Arthur Fungarium database, an important collection of rust fungi genetic sequences, available to a broad range of researchers. We created a custom Basic Local Alignment Search Tool (BLAST) instance on RCAC's Geddes platform, giving them a low-cost, lightweight solution.
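A hosted search tool like this typically wraps NCBI's BLAST+ suite. The sketch below shows how such a wrapper might invoke `blastn` against a locally built database; the file paths, database name, and defaults are illustrative placeholders, not the actual Geddes deployment.

```python
import shutil
import subprocess

def blastn_command(query_fasta, db_path, out_path, evalue=1e-5, max_hits=10):
    """Build a BLAST+ nucleotide search command against a local database.

    All paths here are hypothetical examples.
    """
    return [
        "blastn",
        "-query", query_fasta,           # user-submitted FASTA sequences
        "-db", db_path,                  # database built beforehand with makeblastdb
        "-out", out_path,
        "-outfmt", "6",                  # tabular output, easy to parse or display
        "-evalue", str(evalue),
        "-max_target_seqs", str(max_hits),
    ]

def run_search(query_fasta, db_path, out_path):
    """Run the search if BLAST+ is installed; a web front end would call this per upload."""
    if shutil.which("blastn") is None:
        raise RuntimeError("BLAST+ not found on PATH")
    subprocess.run(blastn_command(query_fasta, db_path, out_path), check=True)
```

Keeping command construction separate from execution makes the wrapper easy to test and to expose behind a simple web form.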

Dr. Shalamar Armstrong
The Soil Environment Nutrient Dynamic (SEND) lab focuses on soil conservation and management, soil health, nutrient management, and water quality.
The SEND lab collects data from farm fields across Illinois and Indiana and needed a way to share that data with stakeholders. Working with Dr. Armstrong, we developed workflows and a data architecture that ensure data from the field and the lab flows into a unified environment. Key data were integrated into a database, and a web app was created to guide stakeholders through the lab's research findings.
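As a toy illustration of the field-to-database flow, the sketch below loads a lab/field export into a unified table that a web app could query. The schema, column names, and sample values are invented for the example; the production system handles far more data.

```python
import csv
import io
import sqlite3

# Hypothetical field-sample export; real SEND columns and values will differ.
FIELD_CSV = """site,state,sample_date,nitrate_mg_l
Field A,IL,2023-04-12,8.4
Field B,IN,2023-04-15,5.1
"""

def load_samples(conn, csv_text):
    """Load one field/lab export into the unified samples table."""
    conn.execute(
        """CREATE TABLE IF NOT EXISTS samples (
               site TEXT, state TEXT, sample_date TEXT, nitrate_mg_l REAL)"""
    )
    rows = [
        (r["site"], r["state"], r["sample_date"], float(r["nitrate_mg_l"]))
        for r in csv.DictReader(io.StringIO(csv_text))
    ]
    conn.executemany("INSERT INTO samples VALUES (?, ?, ?, ?)", rows)
    conn.commit()
    return len(rows)

conn = sqlite3.connect(":memory:")
n = load_samples(conn, FIELD_CSV)
```

Funneling every source through one loader keeps the stakeholder-facing app pointed at a single, consistent table.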

Dr. Brock Harpur
The Harpur lab focuses on understanding the evolution of eusocial insect lineages and their mechanisms of adaptation. The lab uses genomic data to support the beekeeping public and the industry at large.
Dr. Harpur wanted to automate the acquisition, alignment, and analysis of bee genetic sequences. We combined several processes into pipelines connected by the Nextflow workflow management system, standardizing the collection, organization, and analysis of over 5,000 sequences and providing continuously updated data that is consistent across projects.
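The staged structure of such a pipeline can be sketched in plain Python; in the real system each stage is a Nextflow process and the data flows between them over Nextflow channels. The stage names, accession IDs, and record format below are invented for the example, and the stages are stubs.

```python
def acquire(accessions):
    """Fetch raw sequence records (stubbed: returns placeholder reads)."""
    return [{"id": acc, "reads": f"raw-{acc}"} for acc in accessions]

def align(records):
    """Align each record against a reference (stubbed transformation)."""
    return [dict(r, alignment=f"aln({r['reads']})") for r in records]

def analyze(records):
    """Summarize aligned records into per-sample results."""
    return {r["id"]: len(r["alignment"]) for r in records}

def pipeline(accessions):
    # Each stage consumes the previous stage's output, the same way
    # connected Nextflow processes pass results downstream.
    return analyze(align(acquire(accessions)))

results = pipeline(["SRR001", "SRR002"])
```

Expressing the stages as a single chained pipeline is what makes rerunning the whole collection reproducible as new sequences arrive.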

Dr. Jackie Boerman
The Boerman lab focuses on dairy cattle nutrition and management.
Dr. Luiz Brito
The Brito lab is interested in traits related to animal behavior and welfare, environmental efficiency, and adaptation to challenging environments.
They needed to integrate a large amount of farm data from six different sources to answer new research questions. Together with their labs and AgIT, we designed pipelines to bring the data into a distributed file system. Custom Jupyter notebooks allow them to easily leverage the Spark analytics engine from Python or R.
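The core integration step, joining per-animal records from several farm systems into one view, can be illustrated with a tiny in-memory sketch. The source names, animal IDs, and fields below are invented, and at real scale this join runs in Spark from a notebook rather than in plain Python.

```python
# Toy per-animal records from three of the (hypothetical) farm data sources.
milking = {"cow1": {"milk_kg": 31.2}, "cow2": {"milk_kg": 28.7}}
sensors = {"cow1": {"steps": 4200}, "cow2": {"steps": 3900}}
genomics = {"cow1": {"breed_value": 0.8}}

def join_sources(*sources):
    """Merge records keyed by animal ID into one unified view per animal."""
    merged = {}
    for src in sources:
        for animal_id, fields in src.items():
            merged.setdefault(animal_id, {}).update(fields)
    return merged

herd = join_sources(milking, sensors, genomics)
```

Note the join is deliberately outer-style: an animal missing from one source (here, cow2 in the genomics data) still appears, just without those fields, which matches how heterogeneous farm records arrive in practice.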