Abstract
Software verification is the problem of determining, given an input program and a specification, whether the program satisfies the specification. There are different strategies for approaching this problem, but, according to comparative evaluations, none of the known strategies is superior to the others. Therefore, many tools for software verification leave the choice of strategy to the user, which is problematic because the user might not be an expert in strategy selection. In the past, several learning-based approaches were proposed to perform this strategy selection automatically. The automatic choice can be formalized by a strategy selector, which is a function that takes as input a model of the given program and returns a verification strategy. The goal of this paper is to identify a small set of program features that (1) can be determined statically for each input program in an efficient way and (2) distinguish the input programs well enough that a strategy selector can be defined that outperforms every constant strategy selector. Our results can serve as a baseline for future comparisons, because our strategy selector is simple and easy to understand, yet powerful enough to outperform the individual strategies. We evaluate our feature set and strategy selector on a large set of 5 687 verification tasks and provide a replication package for comparative evaluation.
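The notion of a strategy selector over boolean program features can be illustrated with a minimal sketch. Note that the feature names (`has_loops`, `has_floats`, `has_arrays`) and strategy labels below are hypothetical placeholders chosen for illustration, not the actual features or strategies evaluated in the paper:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Features:
    """Boolean features determined statically from the input program.

    These three features are illustrative assumptions, not the
    paper's actual feature set.
    """
    has_loops: bool
    has_floats: bool
    has_arrays: bool


def select_strategy(f: Features) -> str:
    """A strategy selector: maps a feature vector to one strategy.

    A constant strategy selector would ignore `f` and always return
    the same strategy; the goal is to outperform every such constant
    selector. The decision rules here are placeholders.
    """
    if f.has_floats:
        return "value-analysis"
    if f.has_loops:
        return "k-induction"
    return "predicate-abstraction"


# Example: a program with loops but no floats or arrays.
strategy = select_strategy(Features(has_loops=True, has_floats=False, has_arrays=False))
print(strategy)  # k-induction
```

Because the feature vector is boolean and small, such a selector is a total function over a finite domain (here, 8 possible inputs), which keeps it simple to state, inspect, and compare against constant selectors.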
Notes
- 4. This has the advantage that the performance difference is not caused by the use of different programming languages, parser frontends, SMT solvers, or libraries, but by the conceptual difference between the strategies (better internal validity). While it would be technically easy to extend the set of available strategies to other software verifiers, we already obtain promising results using only different CPAchecker strategies.
- 6. KI is, strictly speaking, already a composition, because it uses bounded model checking (BMC) [14] as a component.
- 7. CEGAR is the abbreviation for counterexample-guided abstraction refinement [16].
References
Apel, S., Beyer, D., Friedberger, K., Raimondi, F., Rhein, A.v.: Domain types: Abstract-domain selection based on variable usage. In: Proc. HVC. LNCS, vol. 8244, pp. 262–278. Springer (2013). http://www.sosy-lab.org/~dbeyer/Publications/2013-HVC.Domain_Types_Abstract-Domain_Selection_Based_on_Variable_Usage.pdf
Ball, T., Bounimova, E., Kumar, R., Levin, V.: SLAM2: Static driver verification with under 4% false alarms. In: Proc. FMCAD, pp. 35–42. IEEE (2010). http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5770931
Beyer, D.: Second competition on software verification (Summary of SV-COMP 2013). In: Proc. TACAS. LNCS, vol. 7795, pp. 594–609. Springer (2013). http://dx.doi.org/10.1007/978-3-642-36742-7_43
Beyer, D.: Software verification with validation of results (Report on SV-COMP 2017). In: Proc. TACAS. LNCS, vol. 10206, pp. 331–349. Springer (2017). http://dx.doi.org/10.1007/978-3-662-54580-5_20
Beyer, D., Dangl, M., Wendler, P.: Boosting k-induction with continuously-refined invariants. In: Proc. CAV. LNCS, vol. 9206, pp. 622–640. Springer (2015). http://dx.doi.org/10.1007/978-3-319-21690-4_42
Beyer, D., Dangl, M., Wendler, P.: A unifying view on SMT-based software verification. J. Autom. Reasoning 60(3), 299–335 (2018). http://dx.doi.org/10.1007/s10817-017-9432-6
Beyer, D., Henzinger, T.A., Keremoglu, M.E., Wendler, P.: Conditional model checking: A technique to pass information between verifiers. In: Proc. FSE, pp. 57:1–57:11. ACM (2012). http://dx.doi.org/10.1145/2393596.2393664
Beyer, D., Henzinger, T.A., Majumdar, R., Rybalchenko, A.: Path invariants. In: Proc. PLDI, pp. 300–309. ACM (2007). http://www.sosy-lab.org/~dbeyer/Publications/2007-PLDI.Path_Invariants.pdf
Beyer, D., Keremoglu, M.E.: CPAchecker: A tool for configurable software verification. In: Proc. CAV. LNCS, vol. 6806, pp. 184–190. Springer (2011). http://dx.doi.org/10.1007/978-3-642-22110-1_16
Beyer, D., Keremoglu, M.E., Wendler, P.: Predicate abstraction with adjustable-block encoding. In: Proc. FMCAD, pp. 189–197. FMCAD (2010). http://www.sosy-lab.org/~dbeyer/Publications/2010-FMCAD.Predicate_Abstraction_with_Adjustable-Block_Encoding.pdf
Beyer, D., Löwe, S.: Explicit-state software model checking based on CEGAR and interpolation. In: Proc. FASE. LNCS, vol. 7793, pp. 146–162. Springer (2013). http://www.sosy-lab.org/~dbeyer/Publications/2013-FASE.Explicit-State_Software_Model_Checking_Based_on_CEGAR_and_Interpolation.pdf
Beyer, D., Löwe, S., Wendler, P.: Reliable benchmarking: Requirements and solutions. Int. J. Softw. Tools Technol. Transfer (2017). http://dx.doi.org/10.1007/s10009-017-0469-y
Biere, A., Cimatti, A., Clarke, E.M., Strichman, O., Zhu, Y.: Bounded model checking. Adv. Comput. 58, 117–148 (2003). http://dx.doi.org/10.1016/S0065-2458(03)58003-2
Biere, A., Cimatti, A., Clarke, E.M., Zhu, Y.: Symbolic model checking without BDDs. In: Proc. TACAS. LNCS, vol. 1579, pp. 193–207. Springer (1999). http://dx.doi.org/10.1007/3-540-49059-0_14
Bishop, C., Johnson, C.G.: Assessing roles of variables by program analysis. In: Proc. CSEIT, pp. 131–136. TUCS (2005)
Clarke, E.M., Grumberg, O., Jha, S., Lu, Y., Veith, H.: Counterexample-guided abstraction refinement for symbolic model checking. J. ACM 50(5), 752–794 (2003). http://dx.doi.org/10.1145/876638.876643
Clarke, E.M., Kroening, D., Lerda, F.: A tool for checking ANSI-C programs. In: Proc. TACAS. LNCS, vol. 2988, pp. 168–176. Springer (2004). http://dx.doi.org/10.1007/978-3-540-24730-2_15
Czech, M., Hüllermeier, E., Jakobs, M., Wehrheim, H.: Predicting rankings of software verification tools. In: Proc. SWAN, pp. 23–26. ACM (2017). http://dx.doi.org/10.1145/3121257.3121262
Demyanova, Y., Pani, T., Veith, H., Zuleger, F.: Empirical software metrics for benchmarking of verification tools. In: Proc. CAV. LNCS, vol. 9206, pp. 561–579. Springer (2015). http://dx.doi.org/10.1007/978-3-319-21690-4_39
Demyanova, Y., Pani, T., Veith, H., Zuleger, F.: Empirical software metrics for benchmarking of verification tools. Form. Methods Syst. Des. 50(2-3), 289–316 (2017). http://dx.doi.org/10.1007/s10703-016-0264-5
Demyanova, Y., Rümmer, P., Zuleger, F.: Systematic predicate abstraction using variable roles. In: Proc. NFM. LNCS, vol. 10227, pp. 265–281 (2017). http://dx.doi.org/10.1007/978-3-319-57288-8_18
Demyanova, Y., Veith, H., Zuleger, F.: On the concept of variable roles and its use in software analysis. In: Proc. FMCAD, pp. 226–230. IEEE (2013). http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=6679414
Gurfinkel, A., Albarghouthi, A., Chaki, S., Li, Y., Chechik, M.: Ufo: Verification with interpolants and abstract interpretation (competition contribution). In: Proc. TACAS. LNCS, vol. 7795, pp. 637–640. Springer (2013). http://dx.doi.org/10.1007/978-3-642-36742-7_52
Huberman, B.A., Lukose, R.M., Hogg, T.: An economics approach to hard computational problems. Science 275(7), 51–54 (1997). http://www.hpl.hp.com/research/idl/papers/EconomicsApproach/EconomicsApproach.pdf
Lal, A., Qadeer, S., Lahiri, S.K.: A solver for reachability modulo theories. In: Proc. CAV. LNCS, vol. 7358, pp. 427–443. Springer (2012). http://dx.doi.org/10.1007/978-3-642-31424-7_32
Müller, P., Peringer, P., Vojnar, T.: Predator hunting party (competition contribution). In: Proc. TACAS. LNCS, vol. 9035, pp. 443–446. Springer (2015)
Nori, A.V., Rajamani, S.K., Tetali, S., Thakur, A.V.: The Yogi project: Software property checking via static analysis and testing. In: Proc. TACAS. LNCS, vol. 5505, pp. 178–181. Springer (2009). http://dx.doi.org/10.1007/978-3-642-00768-2_17
Rice, J.R.: The algorithm selection problem. Adv. Comput. 15, 65–118 (1976). http://dx.doi.org/10.1016/S0065-2458(08)60520-3
Sajaniemi, J.: An empirical analysis of roles of variables in novice-level procedural programs. In: Proc. HCC, pp. 37–39. IEEE (2002). http://dx.doi.org/10.1109/HCC.2002.1046340
Sherman, E., Dwyer, M.B.: Structurally defined conditional data-flow static analysis. In: Beyer, D., Huisman, M. (eds.) Proc. TACAS, Part II. LNCS, vol. 10806, pp. 249–265. Springer (2018). http://dx.doi.org/10.1007/978-3-319-89963-3_15
Stieglmaier, T.: Augmenting predicate analysis with auxiliary invariants. Master’s Thesis, University of Passau, Software Systems Lab (2016). https://www.sosy-lab.org/research/msc/stieglmaier
Tulsian, V., Kanade, A., Kumar, R., Lal, A., Nori, A.V.: MUX: Algorithm selection for software model checkers. In: Proc. MSR. ACM (2014). https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/msr14.pdf
van Deursen, A., Moonen, L.: Type inference for COBOL systems. In: Proc. WCRE, pp. 220–230. IEEE (1998)
van Deursen, A., Moonen, L.: Understanding COBOL systems using inferred types. In: Proc. IWPC, pp. 74–81. IEEE (1999). http://dx.doi.org/10.1109/WPC.1999.777746
Wendler, P.: CPAchecker with sequential combination of explicit-state analysis and predicate analysis (competition contribution). In: Proc. TACAS. LNCS, vol. 7795, pp. 613–615. Springer (2013). http://dx.doi.org/10.1007/978-3-642-36742-7_45
Wonisch, D., Wehrheim, H.: Predicate analysis with block-abstraction memoization. In: Proc. ICFEM. LNCS, vol. 7635, pp. 332–347. Springer (2012). http://dx.doi.org/10.1007/978-3-642-34281-3_24
Copyright information
© 2018 Springer Nature Switzerland AG
About this paper
Cite this paper
Beyer, D., Dangl, M. (2018). Strategy Selection for Software Verification Based on Boolean Features. In: Margaria, T., Steffen, B. (eds) Leveraging Applications of Formal Methods, Verification and Validation. Verification. ISoLA 2018. Lecture Notes in Computer Science(), vol 11245. Springer, Cham. https://doi.org/10.1007/978-3-030-03421-4_11
DOI: https://doi.org/10.1007/978-3-030-03421-4_11
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-03420-7
Online ISBN: 978-3-030-03421-4