Computer Science Talks and Events

Tuesday, July 28, 2020 - Virtual Event

"Attack, Defend, Steal: Student-Led Research Projects in Adversarial Machine Learning"

Machine learning systems are constantly making decisions based on data. They decide what ads you see when you visit a webpage, whether or not your credit card transactions are flagged as fraudulent, and whether a self-driving car accelerates or brakes—all without humans in the loop. Hackers can (and do!) manipulate data to trick these systems. Adversarial Learning researchers try to make life harder for these hackers. Assistant Professor of Computer Science Scott Alfeld will discuss prior research projects on adversarial learning led by his students. These include attacks against learning systems, methods of hardening learners against attackers, and sneakily stealing data from sequential learners. This discussion will be moderated by Beitzel Professor in Technology and Society (Statistics and Data Science) Nicholas Horton.

Scott Alfeld, assistant professor of computer science, teaches topics in AI, machine learning and security, ranging from the highly practical to the purely theoretical, and aims to span that same spectrum in his introductory courses. Alfeld graduated from the University of Utah and earned an M.S. and a Ph.D. from the Department of Computer Sciences at the University of Wisconsin-Madison.

Alfeld’s primary research is at the intersection of machine learning and security. He studies settings where an intelligent adversary has limited access to perturb data fed into a learned or learning system. The goal of this research is twofold: to detect attacks and to build or augment learning systems to be more robust to undetected attacks. In addition, he develops methods for inferring properties of the underlying sensors (whether trustworthy or not) and incorporating that knowledge into the data analysis pipeline.

Before coming to Amherst, Alfeld taught computer science and public speaking/debate professionally in Salt Lake City, Utah, and Madison, Wis. As a volunteer, he gave guest lectures for courses from the Wisconsin Center for Academically Talented Youth (a nonprofit organization offering courses for advanced students in grades 2 through 12) and taught locksport (recreational lockpicking and related physical security topics) through Sector67 in Madison.

Nicholas Horton, the Beitzel Professor in Technology and Society (Statistics and Data Science), teaches a variety of courses in statistics, data science and related fields, including probability, mathematical statistics, regression and design of experiments. Horton is passionate about improving quantitative and data literacy for students with a variety of backgrounds as well as engagement and mastery of higher-level concepts and capacities to undertake research. Horton graduated from Harvard College and earned a Sc.D. from the Harvard School of Public Health.

Horton has won a number of teaching awards, including the Undergraduate Teaching Award from the Boston Chapter of the American Statistical Association in 2018, the Robert V. Hogg Award for Excellence in Teaching Introductory Statistics from the Mathematical Association of America in 2015, and the Journal of Statistics Education award for best paper in 2011.

As an applied biostatistician and data scientist, Horton bases his work squarely within the mathematical, statistical and computational sciences, but it spans other fields in order to ensure that research is conducted on a sound footing. The real-world data problems that arise in these collaborations often require novel solutions and approaches, since existing methodology is sometimes inadequate. Bridging the gap between theory and practice in interdisciplinary statistics and data science settings is often a challenge, and has been a particular focus of Horton’s work.

Monday, November 18, 2019

3:30 pm in Science Center A131 with refreshments in C209 at 3:00 pm

Conrad Kuklinsky ‘21 & Matteo Riondato

Learning intersections of halfspaces: novel VC-dimension bounds

Abstract: A key question in machine learning research is understanding the trade-off between the size of the training set and the accuracy of the classification function learned by the algorithm. This trade-off can be fully characterized by a single quantity: the VC-dimension of the family of functions that the algorithm may learn. Beautifully combinatorial in nature, the VC-dimension is elusive to compute exactly, but upper bounds on it are sufficient to understand the trade-off. In this talk, we report our recent results: improved upper bounds on the VC-dimension of intersections of half-spaces in high dimensions, a very popular class of functions. We show a novel connection with convex polytopes and with planar graphs. All terms and results will be explained without assuming any specific background in the audience.
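The notion of shattering behind the VC-dimension can be made concrete with a toy class that is much simpler than intersections of half-spaces: closed intervals on the real line. The sketch below (function names are mine, not from the talk) brute-forces every labeling of a point set and checks whether some interval realizes it; intervals shatter any two distinct points but no three, so their VC-dimension is 2.

```python
from itertools import product

def interval_realizes(points, labels):
    """Check whether some closed interval [a, b] labels exactly the
    positive points as positive (a 1-D stand-in for a richer class)."""
    positives = [p for p, y in zip(points, labels) if y]
    if not positives:
        return True  # an empty interval realizes the all-negative labeling
    lo, hi = min(positives), max(positives)
    # Realizable iff no negative point falls inside the tightest
    # interval covering all positives
    return all(not (lo <= p <= hi) for p, y in zip(points, labels) if not y)

def shattered(points):
    """A point set is shattered if every one of the 2^n labelings is realizable."""
    return all(interval_realizes(points, list(labels))
               for labels in product([False, True], repeat=len(points)))
```

For example, `shattered([0.0, 1.0])` holds, while `shattered([0.0, 1.0, 2.0])` fails because the labeling (+, -, +) cannot be realized by any single interval.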

Monday, November 4, 2019

3:30 pm in Science Center A131 with refreshments in C209 at 3:00 pm

Lee Spector, Amherst College

Evolutionary Computation

In the same 1950 article in which Alan Turing described his "imitation game" test for artificial intelligence, he also described ways in which ideas from evolutionary biology might help us to develop AI. It took time for these ideas to be refined, and it took advances in computing infrastructure for them to bear fruit, but now "evolutionary computation" methods are solving scientific and engineering problems that are beyond the reach of other forms of AI.

In this talk, I will introduce the general concepts of evolutionary computation and illustrate some of its applications. I will also describe a contribution to the field that my students and I have recently made, demonstrating that the speed and success of adaptation can be boosted by using random sequences of challenges, rather than overall performance, as the basis for parent selection in evolving populations. This approach increases the problem-solving power of evolutionary computation, and it also raises broader questions about the role of specialists in communities and in evolution.
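The parent-selection scheme described above, known in the genetic programming literature as lexicase selection, can be sketched as follows. This is a minimal illustration under my own naming, not code from the talk: each selection event filters the population through the test cases in a fresh random order, keeping only the individuals with the lowest error on each case in turn.

```python
import random

def lexicase_select(population, errors, rng=random):
    """Select one parent: errors[i][c] is individual i's error on test
    case c. Cases are applied in random order, and only individuals
    with the best error on each case survive to the next case."""
    candidates = list(range(len(population)))
    cases = list(range(len(errors[0])))
    rng.shuffle(cases)
    for case in cases:
        best = min(errors[i][case] for i in candidates)
        candidates = [i for i in candidates if errors[i][case] == best]
        if len(candidates) == 1:
            break
    # Any ties remaining after all cases are broken uniformly at random
    return population[rng.choice(candidates)]
```

Note the contrast with aggregate-fitness selection: a "specialist" that excels on a few cases can be selected even if its total error is poor, which is what allows specialists to persist in the population.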

Bio: Lee Spector is a Visiting Professor of Computer Science at Amherst College, a Professor of Computer Science at Hampshire College, and an Adjunct Professor in the College of Information and Computer Sciences at the University of Massachusetts, Amherst. He received a B.A. in Philosophy from Oberlin College, a Ph.D. in Computer Science from the University of Maryland, College Park, and the highest honor bestowed by the National Science Foundation for excellence in both teaching and research, the NSF Director's Award for Distinguished Teaching Scholars. His areas of teaching and research include evolutionary computation, quantum computation, and intersections of computer science, cognitive science, and the arts. He is the Editor-in-Chief of the journal Genetic Programming and Evolvable Machines (published by Springer).

Monday, October 28, 2019

3:30 pm in Science Center A131 with refreshments in C209 at 3:00 pm

Deby Katz, Carnegie Mellon University

Ensuring Software Quality in Complex Settings

Abstract:  Software runs many things in our lives and our society. It's important that software running vital systems works as intended, but ensuring that software works as intended can be a surprisingly difficult task. In this talk, I'll introduce some of the techniques that software researchers and professionals use to ensure software quality. I'll also examine some well-known software failures: why they happened and how they were missed. I'll discuss some of my work, including work on finding bugs in robotics and autonomous vehicle software.

Bio:  I am a senior Ph.D. student in Carnegie Mellon University's Computer Science Department, working with Professor Claire Le Goues on applications of low-level analysis in software engineering. I also collaborate with Professor Philip Koopman and roboticists on robotics applications of software quality techniques. My research interests include applying dynamic binary instrumentation to software testing and automated bug repair; using low-level software engineering techniques in the domains of robotics and autonomous vehicles; and new approaches to decompilation.

I graduated from Amherst College in 2004, majoring in Computer Science. I graduated from New York University School of Law in 2007, with a J.D. I worked for several years as an intellectual property litigator in New York before returning to computer science.

I am a world traveler, having visited all seven continents. I enjoy theater, knitting, and cooking.

Monday, October 21, 2019

3:30 pm in Science Center A131 with refreshments in C209 at 3:00 pm

Emily Griffen, Loeb Center for Career Exploration and Planning, Amherst College

Preparing for Careers in Technology with the Loeb Center

Abstract:  The field of technology is growing and changing every day, meaning career opportunities are plentiful but sometimes hard to navigate. How do you translate your academic computer science work into a resume that tech recruiters will respond to? What are the best sources for internships and jobs? How do you prepare for a coding interview? What kinds of roles are there beyond software engineering? Emily Griffen, director of the Loeb Center for Career Exploration and Planning, will answer these questions and more, offering concrete action items and resources to help you tackle the career development process in this field. This talk will be especially helpful for interested first-year students and newly declared CS majors.

Monday, September 30, 2019

3:30 pm in Science Center A131 with refreshments in C209 at 3:00 pm

Daniela Hurtado Lange, Georgia Tech

Optimal resource allocation in data center networks: Drift method and transform techniques

Abstract: Several queueing networks arise from cloud computing, such as load balancing systems, ad hoc wireless networks and input-queued switches. However, analyzing them in a general setting can be intractable, so we study their asymptotic behavior to gain insight. In this talk, I present heavy-traffic analysis of different queueing systems using the drift method. In particular, I will show how the method can be used to compute the distribution of queue lengths in systems that satisfy the Complete Resource Pooling condition, and discuss its limitations in analyzing systems that do not.
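The heavy-traffic regime itself is easy to see in simulation, even without the drift method. The sketch below (a toy illustration under my own naming, not material from the talk) simulates a discrete-time single-server queue: the time-averaged queue length stays modest at moderate load but blows up as the load rho = arrival rate / service rate approaches 1, which is exactly the regime heavy-traffic analysis characterizes.

```python
import random

def mean_queue_length(arrival_prob, service_prob, steps=100_000, seed=0):
    """Simulate a discrete-time single-server queue: in each time slot a
    job arrives with probability arrival_prob, and the head-of-line job
    (if any) completes with probability service_prob. Returns the
    time-averaged queue length."""
    rng = random.Random(seed)
    queue, total = 0, 0
    for _ in range(steps):
        if rng.random() < arrival_prob:
            queue += 1
        if queue > 0 and rng.random() < service_prob:
            queue -= 1
        total += queue
    return total / steps
```

With service probability 0.6, raising the arrival probability from 0.3 (rho = 0.5) to 0.57 (rho = 0.95) increases the average queue length by roughly an order of magnitude.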

Bio: Daniela is a third-year Ph.D. student at Georgia Tech, working with Prof. Siva Theja Maguluri. Her interests are in queueing theory and applied probability.

Monday, September 23, 2019

3:30 pm in Science Center A131 with refreshments in C209 at 3:00 pm

Tina Eliassi-Rad, Network Science Institute, Northeastern University

Just Machine Learning

Abstract: Tom Mitchell, in his 1997 Machine Learning textbook, defined the well-posed learning problem as follows: “A computer program is said to learn from experience E with respect to some task T and some performance measure P, if its performance on T, as measured by P, improves with experience E.” In this talk, I will discuss current tasks, experiences, and performance measures as they pertain to fairness in machine learning.

The most popular task thus far has been risk assessment. For example, Jack’s risk of defaulting on a loan is 8, Jill’s is 2; Ed’s risk of recidivism is 9, Peter’s is 1. We know this task definition comes with impossibility results (e.g., see Kleinberg et al. 2016, Chouldechova 2016), and I will highlight new findings in terms of these impossibility results. In addition, most human decision-makers seem to use risk estimates for efficiency purposes and not to make fairer decisions; the task of risk assessment seems to enable efficiency instead of fairness. I will present an alternative task definition whose goal is to provide more context to the human decision-maker.

The problems surrounding experience have received the most attention. Joy Buolamwini (MIT Media Lab) refers to these as the “under-sampled majority” problem: the majority of the population is non-white, non-male; however, white males are overrepresented in the training data. Not being properly represented in the training data comes at a cost to the under-sampled majority when machine learning algorithms are used to aid human decision-makers. There are many well-documented incidents here; for example, facial recognition systems have poor performance on dark-skinned people.

In terms of performance measures, there are a variety of definitions, from group- to individual-fairness, from anti-classification to classification parity to calibration. I will discuss our null model for fairness and demonstrate how to use deviations from this null model to measure favoritism and prejudice in the data.

Bio: Tina Eliassi-Rad is an Associate Professor of Computer Science at Northeastern University in Boston, MA. She is also a core faculty member at Northeastern University's Network Science Institute. Prior to joining Northeastern, Tina was an Associate Professor of Computer Science at Rutgers University; and before that, she was a Member of Technical Staff and Principal Investigator at Lawrence Livermore National Laboratory. Tina earned her Ph.D. in Computer Sciences (with a minor in Mathematical Statistics) at the University of Wisconsin-Madison. Her research is rooted in data mining and machine learning, and spans theory, algorithms, and applications of big data from networked representations of physical and social phenomena. She has over 80 peer-reviewed publications (including several best paper and best paper runner-up awards) and has given over 190 invited talks and 13 tutorials. Tina's work has been applied to personalized search on the World-Wide Web, statistical indices of large-scale scientific simulation data, fraud detection, mobile ad targeting, cyber situational awareness, and ethics in machine learning. Her algorithms have been incorporated into systems used by the government and industry (e.g., IBM System G Graph Analytics) as well as open-source software (e.g., Stanford Network Analysis Project). In 2017, she served as the program co-chair for the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (a.k.a. KDD, which is the premier conference on data mining) and as the program co-chair for the International Conference on Network Science (a.k.a. NetSci, which is the premier conference on network science). In 2010, she received an Outstanding Mentor Award from the Office of Science at the US Department of Energy.

Monday, September 16, 2019

3:30 pm in Science Center A131 with refreshments in C209 at 3:00 pm

Scott Alfeld

Amherst College

Undergraduate Research in Adversarial Learning

Abstract:  With the growing use of data in decision-making systems, new security vulnerabilities have arisen. Attacks (whether propaganda campaigns orchestrated through social media, corporations “cooking the books” to manipulate prices, or glasses frames that trick facial recognition software) are having an ever-increasing effect on our world. In this talk, I’ll present a broad introduction to the field of Adversarial Learning — the study of using machine learning when the input data may be corrupted by an attacker. We’ll get our hands dirty constructing various data manipulation attacks against simplified models. I’ll then cover some of my recent work with students, ranging from detecting and classifying fake news online to preventing the manipulation of stock prices to injecting Trojan horses into the computer chip design process.

Bio:  Scott Alfeld is an Assistant Professor of Computer Science at Amherst College. His research is at the intersection of machine learning and security, centered on using data analysis techniques in the presence of intelligent adversaries. More broadly, his work focuses on performing statistical inference when the source of data is a diverse set of (potentially adversarial) agents or sensors with unknown relationships to one another. Outside of academia, Scott is a wildlife and astronomy enthusiast and volunteers as a locksport instructor.

Computer Science department events are added to this space as they are scheduled. 

Check back often!