MIT and University of Washington Workshop on AI Implementations and Applications: ML Architecture, Systems and Programming Environments - Day 1
June 11, 2021 at 9am-1pm PDT.
Organizers (left to right): Prof. Arvind Krishnamurthy (UW), Prof. Manya Ghobadi (MIT), and Prof. Mohammad Alizadeh (MIT)
Agenda - Videos of Presentations
9:00 - 9:05 AM PDT - Welcome - Conference Organizers
9:05 - 9:35 AM - Zhizhen Zhong (MIT): Arrow and Bow: Bayesian Optimization System for Reconfigurable Wide-Area Networks
9:35 - 10:05 AM - Mehrdad Khani (MIT): SiP-ML: High-Bandwidth Optical Network
Interconnects for Machine Learning Training
10:05 - 10:35 AM - Parimarjan Negi (MIT): Flow-Loss: Learning Cardinality Estimates that
Matter to a Query Optimizer
10:35 - 11:00 AM - Break on Gather Online
11:00 - 11:30 AM - Yuchen Jin (OctoML - work done while at UW): AutoLRS: Automatic
Learning-Rate Schedule by Bayesian Optimization on the Fly
11:30 - 12:00 PM - Ryan Marcus (MIT): Machine Learning for Query Optimization
12:00 - 12:30 PM - Liang Luo (Facebook - work done while at UW): Throughput And Cost Optimizer for Public-Cloud-based Distributed Training
12:30 - 12:35 PM - Wrap-Up
12:35 - 1:00 PM - Open Networking and Discussion Time on Gather Online
Speaker Abstracts and Bios (in alphabetical order by last name)
Yuchen Jin (OctoML - work done while at UW): AutoLRS: Automatic Learning-Rate Schedule by Bayesian Optimization on the Fly
Abstract: The learning rate (LR) schedule is one of the most important hyperparameters that need careful tuning when training DNNs. However, it is also one of the least automated parts of machine learning systems, and it usually costs significant manual effort and compute. Though there are pre-defined LR schedules and optimizers with adaptive LR, they introduce new hyperparameters that need to be tuned separately for different tasks/datasets. In this work, we consider the question: can we automatically tune the LR over the course of training, without human involvement?
We propose an efficient method, AutoLRS, which automatically optimizes the LR for each training stage by modeling training dynamics. AutoLRS aims to find an LR, applied for every τ steps, that minimizes the resulting validation loss. We solve this black-box optimization on the fly by Bayesian optimization (BO). However, collecting training instances for BO requires a system to evaluate each LR queried by BO’s acquisition function for τ steps, which is prohibitively expensive in practice. Instead, we apply each candidate LR for only τ′ ≪ τ steps and train an exponential model to predict the validation loss after τ steps. This mutual-training process between BO and the loss-prediction model allows us to limit the training steps invested in the BO search. We demonstrate the advantages and the generality of AutoLRS through extensive experiments training DNNs for tasks from diverse domains with different optimizers. The LR schedules auto-generated by AutoLRS lead to speedups of 1.22×, 1.43×, and 1.5× when training ResNet-50, Transformer, and BERT, respectively, compared to the LR schedules in their original papers, and an average speedup of 1.31× over state-of-the-art heavily tuned LR schedules.
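The loss-forecasting step above can be sketched in a few lines. This is a minimal illustration, not the authors’ code: it assumes the validation loss under a fixed LR roughly follows an exponential curve a·exp(−b·t) + c, fits that curve to the first τ′ observations (using a small grid search over b, a stand-in for a proper curve fit), and extrapolates to step τ.

```python
import math

def fit_exponential(ts, losses):
    """Fit loss(t) ~ a*exp(-b*t) + c. For each b on a coarse grid, solve for
    a and c by linear least squares on x = exp(-b*t); keep the best b."""
    best = None
    for b in [k / 100.0 for k in range(1, 200)]:
        xs = [math.exp(-b * t) for t in ts]
        n = len(ts)
        sx, sy = sum(xs), sum(losses)
        sxx = sum(x * x for x in xs)
        sxy = sum(x * y for x, y in zip(xs, losses))
        denom = n * sxx - sx * sx
        if abs(denom) < 1e-12:
            continue
        a = (n * sxy - sx * sy) / denom
        c = (sy - a * sx) / n
        err = sum((a * x + c - y) ** 2 for x, y in zip(xs, losses))
        if best is None or err < best[0]:
            best = (err, a, b, c)
    _, a, b, c = best
    return a, b, c

def predict_loss_at(tau, ts, losses):
    """Extrapolate the validation loss at step tau from tau' << tau observations."""
    a, b, c = fit_exponential(ts, losses)
    return a * math.exp(-b * tau) + c
```

In AutoLRS this prediction replaces a full τ-step evaluation of each candidate LR, which is what makes the BO search affordable.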
Bio: Yuchen Jin is an MLSys engineer at OctoML. He recently graduated from the University of Washington, where he worked with Prof. Arvind Krishnamurthy on machine learning systems and networked systems.
Mehrdad Khani (MIT): SiP-ML: High-Bandwidth Optical Network Interconnects for Machine Learning Training
Abstract: The computation requirements of large ML models have been partly met by the rapid development of ML hardware accelerators and specialized software stacks. Although hardware accelerators have provided a significant speed-up, today’s training tasks can still take days and even weeks. Solutions such as NVIDIA DGX enable distributed training on a small number of GPUs (e.g., 8–16) connected by a high-speed electrical switch with Tbps bandwidth, while large-scale ML clusters resort to connecting GPU servers over much slower InfiniBand fabrics. We argue that future distributed ML training workloads are likely to require several Tbps of bandwidth per device at large scales, creating a pressing need for entirely new ways to build interconnects for distributed ML systems.
We propose optical network interconnects as a key enabler for building high-bandwidth ML training clusters with strong scaling properties. Our design, called SiP-ML, enables accelerating the training time of popular DNN models using silicon-photonics I/O capable of providing multiple terabits-per-second of bandwidth per GPU.
Bio: Mehrdad is a Ph.D. student at MIT CSAIL, advised by Prof. Mohammad Alizadeh. His research interests are in the areas of computer networks, systems, and applied machine learning. His current research focuses on machine learning at the edge, large-scale distributed training, and video streaming. Prior to MIT, Mehrdad completed his dual B.Sc. degree in electrical engineering and computer science at Sharif University of Technology, Iran, in 2016.
Liang Luo (Facebook - work done while at UW): Throughput And Cost Optimizer for Public-Cloud-based Distributed Training
Abstract: Cost-efficiency and training time are primary concerns in cloud-based distributed training today. Finding the best VM configuration is the key to low cost and high throughput training. However, optimal VM selection with user constraints requires efficiently navigating a large search space of different VM families, sizes, counts, billing modes, and heterogeneity, while controlling for the significant performance variance associated with oversubscribed and dynamically shared cloud instances and networks.
In this work, we characterize the compute and communication performance variation in a public cloud environment in the context of distributed training and present a comprehensive throughput and cost-efficiency study across a wide array of instance choices to help prune the optimal VM search space. Using insights from these studies, we build TACO (Throughput And Cost Optimizer), a system that combines runtime profiling with learned performance models to accurately predict training performance and find the best choice of VMs that satisfies user constraints. Notably, TACO can leverage both heterogeneous setups and spot instances to meet user constraints.
We integrate TACO with PyTorch and evaluate it in an unmodified Amazon EC2 cloud environment. Our results show that TACO achieves an end-to-end iteration-latency prediction error of 8.3% across various deep learning models, and its VM instance recommendations can offer up to twice the throughput and half the cost compared to state-of-the-art baselines in real-world scenarios.
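The selection step TACO automates can be pictured as a constrained search over candidate configurations: predict each candidate’s throughput, then pick the cheapest one that meets the user’s constraint. The sketch below is purely illustrative; the catalog names, prices, and predicted throughputs are invented stand-ins for TACO’s learned performance model, not real EC2 measurements.

```python
# Hypothetical VM catalog: (configuration, hourly_cost_usd, predicted_examples_per_sec).
# The throughput column stands in for the output of a learned performance model.
CANDIDATES = [
    ("p3.2xlarge x4", 12.24, 900.0),
    ("p3.8xlarge x1", 12.24, 1100.0),
    ("g4dn.12xlarge x2", 7.82, 650.0),
    ("g4dn.12xlarge x4 (spot)", 4.70, 1200.0),
]

def best_config(min_throughput):
    """Return the cheapest configuration whose predicted training throughput
    meets the user's constraint, or None if no candidate is feasible."""
    feasible = [c for c in CANDIDATES if c[2] >= min_throughput]
    return min(feasible, key=lambda c: c[1]) if feasible else None
```

Note how the constrained search naturally surfaces spot and heterogeneous options when they dominate on cost, which is the behavior the abstract highlights.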
Bio: Liang Luo is a research scientist on the AI System Codesign team at Facebook. Before joining Facebook, he was a graduate student at the University of Washington, working with Prof. Luis Ceze and Prof. Arvind Krishnamurthy on distributed learning systems.
Ryan Marcus (MIT): Machine Learning for Query Optimization
Abstract: Over the past several decades, data management systems have developed a complex suite of human-engineered heuristics to support advanced tasks. Could machine learning replace these heuristics, much as learned models replaced hand-tuned heuristics in computer vision and natural language processing? This talk will explore possibilities for replacing or improving those heuristics with machine learning in the context of query optimization.
Bio: Ryan Marcus is a postdoc at MIT CSAIL working with Tim Kraska, as well as a scientist at Intel Labs. Ryan's work focuses on applications of machine learning to systems, especially database systems. You can find out more about him on his website: https://rmarcus.info
Parimarjan Negi (MIT): Flow-Loss: Learning Cardinality Estimates that Matter to a Query Optimizer
Abstract: Recently there has been significant interest in using machine learning to improve the accuracy of cardinality estimation. This work has focused on improving average estimation error, but not all estimates matter equally for downstream tasks like query optimization. Since learned models inevitably make mistakes, the goal should be to improve the estimates that make the biggest difference to an optimizer. We introduce a new loss function, Flow-Loss, for learning cardinality estimation models. Flow-Loss approximates the optimizer’s cost model and search algorithm with analytical functions, which it uses to optimize explicitly for better query plans. At the heart of Flow-Loss is a reduction of query optimization to a flow routing problem on a certain “plan graph”, in which different paths correspond to different query plans. To evaluate our approach, we introduce the Cardinality Estimation Benchmark (CEB), which contains the ground-truth cardinalities for sub-plans of over 16K queries from 21 templates with up to 15 joins. We show that across different architectures and databases, a model trained with Flow-Loss improves the plan costs and query runtimes despite having worse estimation accuracy than a model trained with Q-Error. When the test-set queries closely match the training queries, both models perform well. However, the Q-Error-trained model degrades significantly when evaluated on slightly different queries (e.g., similar but unseen query templates), while the Flow-Loss-trained model generalizes better to such situations, achieving 4–8× better 99th-percentile runtimes on unseen templates with the same model architecture and training data.
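The contrast between Q-Error and a plan-aware objective like Flow-Loss can be shown with a toy example. Q-Error is the standard symmetric relative-error metric; the plan-cost function below is invented for exposition (it is not the paper’s cost model) and simply mimics an optimizer that switches join strategies at a cardinality threshold.

```python
def q_error(est, true):
    """Standard Q-Error: symmetric relative error of a cardinality estimate."""
    return max(est / true, true / est)

def toy_plan_cost(est, true):
    """Invented cost model: the optimizer picks one join strategy when the
    estimate exceeds 1000 rows. The plan is cheap (10) if that choice matches
    what the true cardinality warrants, and 10x more expensive otherwise."""
    picks_big_table_plan = est > 1000
    return 10.0 if picks_big_table_plan == (true > 1000) else 100.0

# Two estimates with identical Q-Error can matter very differently to the plan:
# est=500 vs true=250 keeps the good plan, while est=2000 vs true=1000 crosses
# the threshold and triggers the 10x-worse plan -- exactly the kind of error a
# plan-aware loss penalizes and a pure accuracy loss does not.
```

This is the intuition behind optimizing for plan cost rather than estimation accuracy alone.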
Bio: Parimarjan is a third-year PhD student at MIT advised by Professor Mohammad Alizadeh. He also works closely with Professor Tim Kraska and with the Gray Systems Laboratory team led by Alekh Jindal at Microsoft. He has been working on introducing learned components into computer systems, particularly database query optimizers. Before MIT, Parimarjan did his Bachelor's in Math at Stanford.
Zhizhen Zhong (MIT): Arrow and Bow: Bayesian Optimization System for Reconfigurable Wide-Area Networks
Abstract: Physical-layer reconfigurability is the next frontier for full-stack software-defined networking. The complex physics of signal amplification and impairment in wide-area fiber optics hinders physical-layer reconfigurability in today’s Internet, making adding and removing wavelengths a slow and vendor-proprietary process. We leverage Bayesian Optimization (BO) as a sample-efficient framework for optimizing this “black-box” system. We demonstrate a practical BO system for wavelength deployment on Facebook’s optical backbone. It is open-source, compatible with any vendor, and achieves 4.76× faster wavelength deployment than the state of the art. We further propose a novel cross-layer Traffic Engineering (TE) system that considers the reconfigurable physical layer in its mathematical formulation while remaining computationally tractable enough to optimize traffic allocations periodically at scale. Our TE system can support 56% more traffic without compromising service availability. Our work is being deployed at Facebook.
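For readers unfamiliar with BO as a sample-efficient black-box optimizer, the core loop can be sketched in a few dozen lines. This is a generic, self-contained illustration (a Gaussian-process surrogate with a lower-confidence-bound acquisition rule over a 1-D grid), not the system described in the talk, which tunes live optical hardware.

```python
import math

def rbf(x1, x2, ls=0.3):
    """Squared-exponential kernel with length scale ls."""
    return math.exp(-((x1 - x2) ** 2) / (2 * ls * ls))

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting (small n)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def gp_posterior(xs, ys, xq, noise=1e-6):
    """Gaussian-process posterior mean and variance at query point xq."""
    K = [[rbf(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    alpha = solve(K, ys)
    k = [rbf(x, xq) for x in xs]
    mu = sum(ki * ai for ki, ai in zip(k, alpha))
    v = solve(K, k)
    var = max(rbf(xq, xq) - sum(ki * vi for ki, vi in zip(k, v)), 1e-12)
    return mu, var

def bayes_opt(f, iters=10):
    """Minimize a black-box f on [0, 1] with a lower-confidence-bound rule."""
    xs, ys = [0.0, 1.0], [f(0.0), f(1.0)]
    grid = [i / 100.0 for i in range(101)]
    for _ in range(iters):
        def lcb(x):
            mu, var = gp_posterior(xs, ys, x)
            return mu - 2.0 * math.sqrt(var)
        xq = min(grid, key=lcb)   # most promising point under the surrogate
        xs.append(xq)
        ys.append(f(xq))          # one (expensive) black-box evaluation
    best = min(range(len(ys)), key=lambda i: ys[i])
    return xs[best], ys[best]
```

The appeal for systems like the one in this talk is the small number of calls to `f`: each evaluation here corresponds to an expensive real-world measurement, so a surrogate model that proposes only a handful of trials matters far more than per-iteration compute.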
Bio: Zhizhen Zhong is a postdoctoral researcher at MIT CSAIL with Prof. Manya Ghobadi. He works at the intersection of networked systems and applied optics for next-generation intelligent backbone networks and high-performance AI computing systems. Before joining MIT, he was a visiting researcher at Facebook. He received his PhD and bachelor's degrees in Electronic Engineering from Tsinghua University in 2019 and 2014, respectively.
QUOTES FROM PREVIOUS CLOUD WORKSHOPS
Professor Ken Birman, the N. Rama Rao Professor of Computer Science, Cornell University, “I actually thought it was a fantastic workshop, an unquestionable success, starting from the dinner the night before, through the workshop itself, to the post-event reception for the student Best Poster Awards.”
Professor David Patterson, the Pardee Professor of Computer Science, UC Berkeley, “I saw strong participation at the Cloud Workshop, with some high energy and enthusiasm; and I was delighted to see industry engineers bring and describe actual hardware, representing some of the newest innovations in the data center.”
Professor Christos Kozyrakis, Professor of Electrical Engineering & Computer Science, Stanford University, “As a starting point, I think of these IAP workshops as ‘Hot Chips meets ISCA’, i.e., an intersection of industry’s newest solutions in hardware (Hot Chips) with academic research in computer architecture (ISCA); but more so, these workshops additionally cover new subsystems and applications, and in a smaller venue where it is easy to discuss ideas and cross-cutting approaches with colleagues.”
Professor Hakim Weatherspoon, Professor of Computer Science, Cornell University, “I have participated in three IAP Workshops since the first one at Cornell in 2013 and it is great to see that the IAP premise was a success now as it was then, bringing together industry and academia in a focused workshop and an all-day exchange of ideas. It was a fantastic experience and I look forward to the next IAP Workshop.”
Dr. Carole-Jean Wu, Research Scientist, AI Infrastructure, Facebook Research, and Professor of CSE, Arizona State University, “The IAP Cloud Computing workshop provides a great channel for valuable interactions between faculty/students and the industry participants. I truly enjoyed the venue learning about research problems and solutions that are of great interest to Facebook, as well as the new enabling technologies from the industry representatives. The smaller venue and the poster session fostered an interactive environment for in-depth discussions on the proposed research and approaches and sparked new collaborative opportunities. Thank you for organizing this wonderful event! It was very well run.”
Nathan Pemberton, PhD student, UC Berkeley, "IAP workshops provide a valuable chance to explore emerging research topics with a focused group of participants, and without all the time/effort of a full-scale conference. Instead of rushing from talk to talk, you can slow down and dive deep into a few topics with experts in the field."
Vishal Shrivastav, PhD Student, Cornell University, Best Poster Award Winner, “Attending the IAP workshop was a great experience and very rewarding. I really enjoyed the many amazing talks from both the industry and academia. My personal conversations with several industry leaders at the workshop will definitely guide some of my future research."
Ana Klimovic, Research Scientist at Google Brain, “I have attended three IAP workshops and I am consistently impressed by the quality of the talks and the breadth of the topics covered. These workshops bring top-tier industry and academia together to discuss cutting-edge research challenges. It is a great opportunity to exchange ideas and get inspiration for new research opportunities."
Dr. Pankaj Mehra, VP of Product Planning, Samsung, "Terrific job organizing the Workshop that gave all parties -- students, faculty, industry -- something worthwhile to take back."
Dr. Richard New, VP Research, Western Digital, “IAP workshops provide a great opportunity to meet with professors and students working at the cutting edge of their fields. It was a pleasure to attend the event – lots of very interesting presentations and posters.”