Baris Kasikci earns CAREER Award to automatically improve software quality with data from everyday program use
Kasikci will sift through the byproducts of hundreds of millions of common program executions to determine how this data can automate some key steps in bug finding and fixing.
Prof. Baris Kasikci has earned an NSF CAREER Award to devise a system that can automate some of the most difficult steps in tracking down bugs in software. By collecting and analyzing data already being produced by commonly used software, Kasikci has proposed a method to learn key features of program bugs and errors that can automate their detection in the future.
People execute their most-used programs many times every day – operating systems on their phones and laptops, web browsers, and all the applications that they use for work or fun. Most of the time these programs generate a variety of data byproducts, like standard logs about their execution or crash reports when something goes wrong.
Currently, nearly all of this info is dumped and forgotten.
“That information is mostly going to waste,” says Kasikci. “Developers might use it to do some manual debugging, but it’s not a ‘hive mind’ where you learn from millions of computers and use it to make decisions at a high level.”
That gap is what Kasikci’s lab is working to address. In the coming months, they’ll be sifting through the byproducts of the hundreds of millions of program executions that occur every day in order to determine how this data could help improve the way our software runs. That will involve filtering out a lot of noise and identifying the data that says the most about how a program ran.
“These systems are already gathering these little bits and pieces of information,” Kasikci says. “We’re trying to determine how you can use that info to improve the quality of your software.”
Through this work, Kasikci hopes to provide an answer to a long-open question in the field of debugging: what’s the bare minimum amount of actual execution data developers need to be able to fix a program?
This question arises from the way debugging is typically done. The general approach to testing a program is to explore its state space. The developer wants to test all the different states a program can reach in order to determine whether any of them produces an error, but modern software is far too complex to ever explore exhaustively. Instead, developers have to choose which states to test, and narrowing them down can be tricky.
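To see why exhaustive exploration is out of reach, consider a toy sketch (illustrative only, not drawn from Kasikci’s work): every independent branch in a program doubles the number of paths through it, so even a small function quickly has more states than anyone could test one by one.

```c
/* Toy illustration of state-space explosion: each independent branch
 * doubles the number of possible paths, so n branches yield up to 2^n states. */
#include <stdio.h>

int process(int a, int b, int c) {
    int state = 0;
    if (a > 0) state |= 1;  /* branch 1: 2 possible paths so far */
    if (b > 0) state |= 2;  /* branch 2: 4 paths */
    if (c > 0) state |= 4;  /* branch 3: 8 paths */
    /* With 30 such branches there would be over a billion paths;
     * far too many to test exhaustively. */
    return state;
}

int main(void) {
    /* One concrete run exercises exactly one of the 8 paths. */
    printf("state: %d\n", process(1, -1, 1));
    return 0;
}
```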
In the field, practitioners are divided between two camps on the best way to go about this analysis – static and dynamic. Static analysis involves studying the code itself, using knowledge of the language to determine where something is likely to go wrong. Dynamic analysis, on the other hand, actually executes the program and studies its outcomes. The methods in use today fall somewhere along the spectrum between these two extremes.
Either extreme brings its own shortcomings. Static analysis, Kasikci says, tends to sacrifice accuracy for better efficiency or privacy.
“Because you’re not actually running the program,” he explains, “you’re lacking real execution context and information. You might think there’s a particular bug in a program, but you may be overlooking the fact that the program never actually gets to that point.”
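A hypothetical C fragment illustrates the kind of false positive he describes: a tool reasoning about the code alone may flag a dereference that no real execution ever reaches.

```c
/* Hypothetical false-positive scenario: a static tool analyzing this
 * function in isolation may warn that p could be NULL at the
 * dereference, even though callers uphold an invariant that makes the
 * dangerous path unreachable in any real execution. */
int read_value(const int *p, int have_data) {
    if (!have_data) {
        return -1;  /* by convention, callers pass have_data == 0 whenever p is NULL */
    }
    /* The invariant linking p and have_data lives in the callers, so a
     * purely static analysis of this function cannot see it. */
    return *p;
}
```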
Dynamic analysis, on the other hand, can provide so complete a picture of a program’s functioning that developers can basically hit replay on its execution. This, however, comes at a big cost in speed and invasiveness.
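The cost comes from the recording itself. As a rough sketch (the function and trace format here are invented, not taken from any particular record/replay tool), consider logging just one source of nondeterminism so it can be reproduced later; full record/replay systems must intercept far more events, on every execution.

```c
/* Minimal sketch of record/replay's cost: every source of
 * nondeterminism must be logged so an execution can be reproduced
 * exactly. Even this single wrapper adds file I/O on each call. */
#include <stdio.h>
#include <time.h>

static FILE *trace;  /* hypothetical trace file consumed later by replay */

long traced_time(void) {
    long t = (long)time(NULL);        /* the nondeterministic value */
    fprintf(trace, "time %ld\n", t);  /* record it so replay can return the same value */
    return t;
}

int main(void) {
    trace = fopen("trace.log", "w");
    if (!trace) return 1;
    printf("now: %ld\n", traced_time());
    fclose(trace);
    return 0;
}
```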
Using the massive trove of data that is already being recorded by most software, Kasikci intends to pinpoint where that tradeoff costs the least in both directions – in accuracy and in overhead.
“There is certain crucial information that, if we were to record it, would drastically improve the accuracy of static analysis,” he says. “We want to know the minimum amount of information we can record dynamically so that we can do most of the work statically, offline, in a way that’s much more accurate than is currently being done.”
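One plausible shape for such minimal recording (the names and format below are assumptions for illustration, not Kasikci’s actual design) is a compact record of which branches actually executed, which an offline static analyzer could then use to prune states that no real run ever reached.

```c
/* Hedged sketch of the hybrid idea: record only a compact bitmap of
 * which branches executed. An offline static analyzer could later
 * prune any state reachable only through branches never taken. */
#include <stdint.h>
#include <stdio.h>

static uint64_t branch_bitmap;  /* one bit per instrumented branch */

static void mark_branch(int id) {
    branch_bitmap |= (uint64_t)1 << id;  /* a few cycles per branch, no I/O */
}

int classify(int x) {
    if (x < 0)  { mark_branch(0); return -1; }
    if (x == 0) { mark_branch(1); return 0; }
    mark_branch(2);
    return 1;
}

int main(void) {
    classify(5);
    classify(-3);
    /* The bitmap is the "minimum information recorded dynamically";
     * the rest of the analysis happens statically, offline. */
    printf("branches taken: 0x%llx\n", (unsigned long long)branch_bitmap);
    return 0;
}
```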
In the end, identifying the most helpful kind of data from a program’s execution could lead to an automated tool that determines which of a program’s states are safe and which are likely to produce bugs, vastly cutting down the manual work for developers.
Kasikci says his work will be focused on practical applications, with the hope that his results can be incorporated directly into existing open source or commercial error analysis software.
“The goal is to really have a boost in developer productivity,” he says. “If you’re able to recreate failures and reason about bugs with minimally invasive techniques, then developers can quickly solve these issues without having to go through pages and pages of false reports.”
As a second component of the CAREER Award, Kasikci plans to design a course aimed at teaching basic principles of debugging to middle school students. These principles, Kasikci says, are currently not well explored in CS education, especially at earlier levels.
“At the end of the day, debugging is reasoning about issues in your program and fixing them,” he says. “There’s a critical thinking aspect behind it, and I think it’s important to teach as early as possible.”
According to Kasikci, debugging consumes on average more than 50% of development time in production systems.
The course will make use of the BBC micro:bit educational platform, and will be piloted at the Qualcomm Thinkabit Lab within the Michigan Engineering Zone (MEZ) in Detroit. The MEZ is headquartered at the University of Michigan’s Detroit Center, and offers resources like courses and lab space for engineering education geared toward underrepresented and underserved minorities in Southeast Michigan. The course will also be offered online through Microsoft’s MakeCode website.