I have been involved in several efforts to integrate static code analysis into software projects—none have been terribly successful. Most have resulted in hours of time spent identifying and removing false positives. So, when I read Travis Smith's recent post about Fallible static code analysis, I was immediately struck with the need to add my two bits. First, let me say I fully support Travis' point. Developers (especially for mobile apps) can do a lot to improve software security. Using the tool to find embedded credentials is an excellent use case. Static code analysis is one of many tools that can help.
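To make the embedded-credentials use case concrete, here is a toy sketch of the kind of pattern matching involved. Real static analysis tools use much more sophisticated techniques (data-flow analysis, not just regexes), and the patterns and names below are purely illustrative:

```python
import re

# Patterns that suggest hard-coded credentials. These two rules are
# illustrative only; a real analyzer ships with far richer rule sets.
CREDENTIAL_PATTERNS = [
    re.compile(r'(password|passwd|pwd)\s*=\s*["\'][^"\']+["\']', re.IGNORECASE),
    re.compile(r'(api[_-]?key|secret)\s*=\s*["\'][^"\']+["\']', re.IGNORECASE),
]

def find_embedded_credentials(source: str):
    """Return (line_number, line) pairs that look like hard-coded secrets."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if any(p.search(line) for p in CREDENTIAL_PATTERNS):
            findings.append((lineno, line.strip()))
    return findings

sample = '''db_host = "db.example.com"
db_password = "hunter2"
timeout = 30
'''
print(find_embedded_credentials(sample))  # -> [(2, 'db_password = "hunter2"')]
```

Even this crude approach shows why embedded credentials are such a good first target: the signal is strong and the false-positive rate is comparatively low.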
Analyze early and often
I think Travis missed a key point about using any tool or technique to produce more secure code: the most successful secure coding initiatives are part of a product's software development lifecycle (SDLC) from the very beginning. It doesn't matter whether you are considering security controls in the development process (like static code analysis and threat modeling) or security features (like authentication and authorization). In my experience, the most successful and cost-effective security initiatives are the ones baked into a product from the start.

I have had the pleasure of working on multiple projects where security features and processes were primary concerns, and in each of them the early investment in secure coding was a huge success. Yet even on those projects, belated attempts to integrate static code analysis proved unsustainable. I have yet to work on a project where static code analysis was part of the initial SDLC, but I conjecture that analysis begun with the first line of code committed might actually be sustainable. When you wait and then attempt to analyze a large code base, you spend much of your time playing catch-up eliminating false positives. To produce more secure code, we need to design security in and deliver it in the first iterations of a project, starting off with tools like static code analysis instead of waiting.
Tune it
Another issue with some of the static code analysis projects I have worked on is a lack of tuning. My first experience with static code analysis was running the FindBugs plugin in my IDE against a large codebase; I was so overwhelmed by the sheer volume of issues that I immediately lost interest in advocating for static code analysis as part of our build process. In another case, a customer sent me the results of running a static code analysis tool against our product. They had little information with which to tune the tool, so the results were useless to us for improving our security. (In that case, the real value was leverage for the customer during contract negotiations.) I have since learned that static code analysis tools need to be tuned up front. Running the tool first and trying to sort out the results afterward will likely be overwhelming. If you prepare by learning the tool and tuning it, you can eliminate a lot of false positives and save yourself a lot of time and frustration. You might not care about some of the coding conventions these tools can flag, but by focusing on achievable goals, like identifying embedded database credentials or finding potential XSS issues, you can greatly improve the chances of successfully incorporating static code analysis into your SDLC. Tools generally classify the kinds of issues they identify, so start off by simply scoping your analysis to a security category.
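As one concrete example of this kind of scoping, FindBugs (mentioned above) supports filter files. A minimal include filter that restricts reporting to its security category might look like this; the filename is illustrative:

```xml
<!-- security-filter.xml: report only findings in the SECURITY category -->
<FindBugsFilter>
    <Match>
        <Bug category="SECURITY"/>
    </Match>
</FindBugsFilter>
```

Passed to the tool with the -include option (e.g., `findbugs -textui -include security-filter.xml myapp.jar`), this keeps the initial report small and focused, which is exactly what you need when introducing the tool to a team.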
Do It Continuously
Whichever static code analysis tool you choose, make it part of your continuous integration process and part of developers' daily lives. The tool requires care and feeding to remain a useful component of your secure SDLC. And remember what Travis concluded with:
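A CI step for this can be quite small. The sketch below uses GitLab-CI-style YAML with FindBugs; the job name, filter file, and jar path are illustrative, and you would adapt the invocation to your own analyzer and build system:

```yaml
# Illustrative CI job: run the analyzer on every commit and fail on findings.
static-analysis:
  stage: test
  script:
    - findbugs -textui -include security-filter.xml
        -xml:withMessages -output findbugs.xml build/libs/myapp.jar
    # FindBugs XML reports each finding as a BugInstance element.
    - if grep -q "<BugInstance" findbugs.xml; then
        echo "Static analysis found issues; failing the build.";
        exit 1;
      fi
  artifacts:
    paths:
      - findbugs.xml
```

Failing the build on any finding only works once the tool is tuned; until then, publishing the report as a build artifact and reviewing it daily is a reasonable first step.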
"The key takeaway is to use the tools available to find any defect you can – for even if you don’t, your adversaries will."
Static code analysis tools may produce a lot of data – a determined attacker will wade through it all even if you won't. What you don't know may very well hurt you.