Name: DataEngineering
Owner: Cornell Data Science
Description: The Data Engineering subteam of Cornell Data Science
Created: 2018-01-24 16:37:42.0
Updated: 2018-03-08 18:35:16.0
Pushed: 2018-03-10 07:40:27.0
Homepage: http://cornelldata.science
Size: 7322
Language: XSLT
README
Data Engineering
Who we are:
The CDS Data Engineering subteam exists to provide analysis and processing support to CDS project teams, and to develop institutional knowledge in high throughput computing.
Advisor: Professor Immanuel Trummer
Team Leads: Dae Won Kim (ORIE MENG), Haram Kim (A&S CS 2020)
Team objectives:
- Improve on existing high throughput computing frameworks
- Develop solutions for data analysis problems in CDS projects
- Provide a reservoir of reference information in data engineering
- Research and publish means of improving existing DE frameworks
Current Projects:
- Spark ML Optimization: Apache Spark's machine learning modules are not as well-studied as those of other platforms. This project seeks to empirically identify optimal settings for Spark's ML modules to best utilize the platform's unique capabilities.
- SkinnerDB Parallelization: This project's objective is to experiment with parallelism in Professor Trummer's recently developed database engine, SkinnerDB. SkinnerDB uses a machine learning approach to query optimization, in contrast to the heuristic models used by most current database engines, but has not yet been extended to allow multi-core execution.
- Deterministic Query Approximation: Several recent publications have outlined methods for high-speed query approximation with deterministic error bounds, but these methods have not yet been applied to a wide range of queries. The objective of this project is to apply several of these techniques to the TPC-H benchmark queries to demonstrate broader applicability.
- GPU Acceleration: This project addresses distributed deep learning, which is currently well-optimized for multiple GPUs on a single machine, but not necessarily across multiple machines. Our goal is to research and optimize current tools in development so that they can be adopted by CDS teams deploying large DL models.
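The empirical-tuning approach in the Spark ML project above amounts to sweeping candidate settings and timing each one. A minimal sketch of that idea, using a hypothetical stand-in workload and setting rather than Spark's actual configuration keys:

```python
import time

def train_dummy_model(batch_size, n_rows=200_000):
    """Hypothetical stand-in for an ML training job:
    sums a range of numbers in fixed-size batches."""
    total = 0
    for start in range(0, n_rows, batch_size):
        total += sum(range(start, min(start + batch_size, n_rows)))
    return total

def sweep(settings):
    """Time the workload under each candidate setting and
    return the fastest setting plus all timings."""
    timings = {}
    for batch_size in settings:
        t0 = time.perf_counter()
        train_dummy_model(batch_size)
        timings[batch_size] = time.perf_counter() - t0
    return min(timings, key=timings.get), timings

best, timings = sweep([1_000, 10_000, 100_000])
print(f"best batch size: {best}")
```

In practice the same loop would vary real Spark settings (partition counts, caching strategy, algorithm hyperparameters) against a real training job, averaging several runs per setting to smooth out noise.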
Previous Projects:
- Data streaming: Profiling of real-time data streaming through Apache Kafka
- Server monitoring: Real-time visualization and monitoring of compute server resource utilization through Cockpit
- File format optimization and profiling: Comparative analysis of a variety of file formats typically used in data science, focusing on CSV and Apache Parquet
- Spark diagnostics: Deliberate attempts to produce errors while running Apache Spark, both locally and on our servers. Problem specifics and solutions were recorded in case similar issues arise in the future.
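A comparison in the spirit of the file-format profiling project can be sketched with the standard library alone. Reading and writing Parquet requires a third-party library (e.g. pyarrow), so a pickled binary file stands in here for the non-CSV format; the sample table is invented for illustration:

```python
import csv
import os
import pickle
import tempfile

# Hypothetical sample table: 10,000 (id, value) rows.
rows = [(i, i * 0.5) for i in range(10_000)]

with tempfile.TemporaryDirectory() as d:
    csv_path = os.path.join(d, "data.csv")
    bin_path = os.path.join(d, "data.pkl")

    # Text format: one row per line, comma-separated.
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "value"])
        writer.writerows(rows)

    # Binary stand-in for a columnar format like Parquet.
    with open(bin_path, "wb") as f:
        pickle.dump(rows, f)

    csv_size = os.path.getsize(csv_path)
    bin_size = os.path.getsize(bin_path)
    print(f"CSV: {csv_size} bytes, binary: {bin_size} bytes")
```

A fuller version of this comparison would also time reads and writes, not just file sizes, since Parquet's columnar layout mainly pays off when queries touch a subset of columns.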
Members (SP2018):